Martin Weller has written some excellent posts on disruption and disruptive innovation. In his most recent blog post on disruption and the unenlightenment he argues that “knowledge of any area itself is viewed as a reason not to trust someone.” I’ve come across this myself; more critically, I’ve seen others place a higher value on knowledge that is unencumbered by context. In our own environment, for example, having business acumen is treated as more valuable than having knowledge of the higher education sector. This has been reflected over the past decade in job descriptions and recruitment processes in HE, and it also applies to politics, where Farage and Trump are seen to gain value from coming from outside the political system. Within higher education this has resulted in a rash of appointments of people from outside the sector to senior positions.
This is not necessarily a bad thing. I see the higher education sector as an ecosystem: too much inbreeding within too small a gene pool will lead to stagnation and mutation – in HE this can be seen in confirmation bias, since meetings with the same cohort provide no novel insight or new interpretations of the original plan. On the other hand, too much migration and churn leads to a different but equally serious problem, where specialist knowledge is lost to the organisation and the sector, so decisions are not based on a full evaluation of the evidence. The past influences the future, so there is a balance to be struck. When you bring new people and talent into an organisation you provide opportunities for cultural advancement and change. Ideas can move across domains in a way that allows things to happen. People ask questions like “why can’t you do it like that?” and you realise that because you had issues previously you had mentally blocked off an opportunity.
As an example I have had some of my richest conversations recently with Rosie Jones the new Director of Library Services. In her induction we discussed using gaming approaches in the workplace to stimulate new thinking as we both have backgrounds in serious gaming.
I have now begun applying some of these approaches in events that I am facilitating for Leadership in Digital Innovation. I wouldn’t have been able to make the mental leap without her fresh perspective on some of the organisational issues, adopting what Dave Coplin might describe as non-linear thinking.
My point is that stimulation is a good thing as it can build the conditions for a new system to emerge – but disruption by its nature means that, as Martin describes it, “there is no collaboration, working alongside, improving here”. It’s what Bernard Stiegler describes in his interview How to Survive “Disruption” as “a form of widespread dispossession of knowledge, of life skills and indeed of livelihood across Europe through the rapid political, social and technological changes to work and everyday life.”
Crucially for both education and politics we must seek to understand, value, and then challenge the current system in order to create the system we need.
The second of my OER17 posts. Having come down on the side of a loose definition of OEP, a connected strand was the idea of openness as a gift. In Maha Bali’s keynote she mentioned that gift giving can be problematic: we don’t always know that people want the gift, they may feel indebted, and it may be inappropriate. In our panel session later, I wondered whether this was applicable to openness in general – we give the gift of open to people on the assumption that they will want it, or that it will do them good. Maybe they don’t want it. In that sense maybe it’s like giving someone a dog – now, if it’s me, great, I love dogs, but others don’t and would feel a sense of burden, or at least it might not be appropriate at that stage in their life.
And to riff off another Maha thought: in our joint session about Virtually Connecting she made an analogy with local and global optima from neural networks. This argues that you may think you’re at an optimum, but there could be a better one further away, and that it requires energy and resources to get out of your current one and reach it. So for Virtually Connecting, maybe it’s at or near a local optimum for the people it can reach given the current set-up. In order to reach another optimum, it might require a lot of resources (more people, funding etc). I wonder if this is true of openness, and OEP, also. We are not near a local optimum yet, but we might get to one that helps a lot of educators do beneficial things with their learners, and helps learners take control of aspects of their own educational experience. But we’ve been operating under the assumption (I think) that it’s for everyone. Maybe, like the dog gift, it isn’t; or if it is, that is a whole other level of resource and energy required, and we should concentrate on finding the local optimum first.
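The optimisation idea behind Maha’s analogy can be sketched in a few lines of Python. This is my own toy illustration, not anything from her talk: a greedy hill-climber on a made-up two-peak landscape (the function `f` and the starting points are invented for the example) settles on whichever peak is nearest, even when a higher one exists further away.

```python
# Toy illustration (hypothetical landscape, not from the original post):
# greedy local search stops at the nearest peak, so escaping to the
# higher, "global" peak would need extra energy (bigger jumps, restarts).

def f(x):
    # Two peaks: a local one at x=2 (height 10) and a global one at x=8 (height 20).
    return -(x - 2) ** 2 + 10 if x < 5 else -(x - 8) ** 2 + 20

def hill_climb(x, step=0.1, iters=1000):
    """Move uphill in small steps; stop when no neighbour is better."""
    for _ in range(iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            break
        x = best
    return x

local_peak = hill_climb(0.0)   # starts in the lower basin, climbs to ~2
global_peak = hill_climb(7.0)  # starts in the higher basin, climbs to ~8
print(round(local_peak), round(global_peak))  # → 2 8
```

The climber starting at 0 never reaches the higher peak because every small move towards it first goes downhill – which is the point of the analogy: getting from a local optimum to a better one requires resources beyond business-as-usual.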
PS – don’t actually give me a dog as a gift, this chap says no:
I was at OER17 last week (I have another post about the evolution of the OER conference coming up – but in short, great work everybody). I have a couple of posts now in an attempt to fuse together some strands that came out of that and subsequent discussions, particularly around the topic of Open Educational Practice.
The first strand is around definitions. Beck gave a good overview of definitions of OEP in her talk, which led nicely into a presentation from Catherine Cronin and Laura Czerniewicz on the use of critical pragmatism to address issues in OEP. Laura and Catherine took a fairly broad approach to what constitutes OEP, and a member of the audience raised the question of whether this could lead to openwashing: if you have a loose definition then it becomes easy for someone to claim they are doing it. At the same time there was a post from David Wiley, who really attempted to pin this down with regards to ‘open pedagogy’:
open pedagogy is the set of teaching and learning practices only possible or practical in the context of the 5R permissions. Or, to operationalize, open pedagogy is the set of teaching and learning practices only possible or practical when you are using OER
This led to an almighty Twitter discussion, particularly from Mike Caulfield, who suffers from being way more intelligent than most of us and therefore brings more to bear on any topic than I can usually accommodate. I certainly began to lose the will to live reading this thread (sorry Mike). It began to remind me of the old “learning object” definition debates. Remember how much we enjoyed them? Or even better the “all day debate” between Downes and Wiley from 2009 (I believe there are some alternate universes where this is still going on). Jim Groom blogged that he felt uneasy with this push to define OEP so tightly:
I am not interested in the strict rules that define open; open is not the ends, it is one means amongst many. But, I do wonder at the push to consolidate the definition beyond OERs into Open Educational Practices. Seems to me there is an attempt to define it in order to start controlling it, and that is often related to resources, grants, etc
I think this is where I’m coming around to – OER has benefitted from a tight definition, and so we thought OEP would also. But that tight definition works for content, not practice. We should stop focusing on OEP definitions and instead look to a general opening up of practice. And hey, if some things get a bit messy around the edges, we’ll have to live with that. So, in order to combat the need to define things, I’m going to offer, erm, a definition. This is roughly what I have in my head when we talk about OEP, and is broad enough to include interesting stuff:
Open educational practice covers any significant change in educational practice afforded by the open nature of the internet
That’s it. You don’t have to have the same definition, but that’s what I’m going with. And if that leaves too much room for doubt, then as Douglas Adams said “We demand rigidly defined areas of doubt and uncertainty!”
And here are the Dream Warriors to tell us about My Definition of a Boombastic Jazz Style
Scattered between my research presentations at LAK17 was my work as a member of the executive for the Society for Learning Analytics Research (SoLAR). The executive met daily during the conference – it is the only chance we have each year for face-to-face meetings. The LAK conferences also provide a venue for the AGM of the society and, despite the size of the room where the AGM was held, it was standing room only for most of the meeting.
The executive also have a role to play in decisions about the conference itself, as well as acting as reviewers on the programme committee and chairs for the different sessions. Next year, at LAK18 in Vancouver, I shall be taking on a bigger role, as one of the programme chairs for the conference.
The picture shows me with half the SoLAR Executive at the post-LAK17 review meeting.
The European FP7-funded learning analytics community exchange (LACE) project came to an end last June. Since then, we have become a special interest group (SIG) of the Society for Learning Analytics Research (SoLAR) and we are now the learning analytics community Europe (LACE).
Although the loss of large-scale funding has meant scaling down our activities, we have still been active and our Twitter account reflects some of that work – including presentations on European learning analytics work in China, Japan and South Korea.
The LAK17 conference provided a chance for eight of the international team to get together and plan our next event, a workshop in our ethics and privacy in learning analytics series (EP4LA) that we are submitting to this year’s ECTEL conference.
Our LAK Failathon workshop at the start of LAK 17 generated the basic ideas for a poster on how the field of learning analytics can increase its evidence base and avoid failure.
We took the poster to the LAK17 Firehose session, where Doug Clow provided a lightning description of it, and we then used the poster to engage people in discussion about the future of the field.
Despite the low production quality of the poster (two sheets of flip chart paper, some post-it notes and a series of stickers to mark agreement) its interactive quality obviously appealed to participants and we won best poster award. :-)
Clow, Doug; Ferguson, Rebecca; Kitto, Kirsty; Cho, Yong-Sang; Sharkey, Mike and Aguerrebere, Cecilia (2017). Beyond Failure: The 2nd LAK Failathon Poster. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 540–541.
Our main paper at the LAK conference looked at the state of evidence in the field. Drawing on the work collated in the LACE project Evidence Hub, it seems that there is, as yet, very little clear evidence that learning analytics improve learning or teaching. The paper concludes with a series of suggestions about how we can work as a community to improve the evidence base of the field.
The room was full to overflowing for our talk and for the other two talks in the session on the ethics of learning analytics. If you weren’t able to get in and you want to understand the links between jelly beans, a dead salmon, Bob Dylan, Buffy the Vampire Slayer and learning analytics, I shall share the link to the recorded session as soon as I have it.
Ferguson, Rebecca and Clow, Doug (2017). Where is the evidence? A call to action for learning analytics. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 56–65.
Where is the evidence for learning analytics? In particular, where is the evidence that it improves learning in practice? Can we rely on it? Currently, there are vigorous debates about the quality of research evidence in medicine and psychology, with particular issues around statistical good practice, the ‘file drawer effect’, and ways in which incentives for stakeholders in the research process reward the quantity of research produced rather than the quality. In this paper, we present the Learning Analytics Community Exchange (LACE) project’s Evidence Hub, an effort to relate research evidence in learning analytics to four propositions about learning analytics: whether they support learning, support teaching, are deployed widely, and are used ethically. Surprisingly little evidence in this strong, specific sense was found, and very little was negative (7%, N=123), suggesting that learning analytics is not immune from the pressures in other areas. We explore the evidence in one particular area in detail (whether learning analytics improve teaching and learner support in the university sector), and set out some of the weaknesses of the evidence available. We conclude that there is considerable scope for improving the evidence base for learning analytics, and set out some suggestions of ways for various stakeholders to achieve this.
Monday 13 March was the day of the second LAK Failathon, this time held at the LAK17 conference at Simon Fraser University in Vancouver. This year, we took the theme ‘Beyond Failure’ and the workshop led into a paper later in the conference and then to a crowd-sourced paper on how we can work to avoid failure both on individual projects and across the learning analytics community as a whole.
We also took a consciously international approach, so the workshop leaders included Doug Clow and me from Europe, Mike Sharkey from North America, Cecilia Aguerrebere from South America, Kirsty Kitto from Australia and Yong-Sang Cho from Asia.
Clow, Doug; Ferguson, Rebecca; Kitto, Kirsty; Cho, Yong-Sang; Sharkey, Mike and Aguerrebere, Cecilia (2017). Beyond failure: the 2nd LAK Failathon. In: LAK ’17 Proceedings of the Seventh International Learning Analytics & Knowledge Conference, ACM International Conference Proceeding Series, ACM, New York, USA, pp. 504–505.
If you can’t access the workshop outline behind the paywall, contact me for a copy.
The 2nd LAK Failathon will build on the successful event in 2016 and extend the workshop beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Thus providing an alternative forum for practitioners and researchers to learn from each other’s failures can be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, to help participants learn from each other’s mistakes. It was very successful, and there was strong support for running it as an annual event. This workshop will build on that success, with twin objectives to provide an environment for individuals to learn from each other’s failures, and also to co-develop plans for how we as a field can better build and deploy our evidence base.
A very busy week in Vancouver at the LAK17 (learning analytics and knowledge) conference kicked off with the all-day doctoral consortium on 14 March (funded by SoLAR and the NSF). I joined Bodong Chen and Ani Aghababyan as an organiser this year and we enjoyed working with the ten talented doctoral students from across the world who gained a place in the consortium.
- Alexander Whitelock-Wainwright: Students’ intentions to use technology in their learning: The effects of internal and external conditions
- Alisa Acosta: The design of learning analytics to support a knowledge community and inquiry approach to secondary science
- Daniele Di Mitri: Digital learning shadow: digital projection, state estimation and cognitive inference for the learning self
- Danielle Hagood: Learning analytics in non-cognitive domains
- Justian Knobbout: Designing a learning analytics capabilities model
- Leif Nelson: The purpose of higher education in the discourse of learning analytics
- Quan Nguyen: Unravelling the dynamics of learning design within and between disciplines in higher education using learning analytics
- Stijn Van Laer: Design guidelines for blended learning environments to support self-regulation: event sequence analysis for investigating learners’ self-regulatory behavior
- Tracie Farrell Frey: Seeking relevance: affordances of learning analytics for self-regulated learning
- Ye Xiong: Write-and-learn: promoting meaningful learning through concept map-based formative feedback on writing assignments
The intention of the doctoral consortium was to support and inspire doctoral students in their ongoing research efforts. The objectives were to:
- Provide a setting for mutual feedback on participants’ current research and guidance on future research directions from a mentor panel
- Create a forum for engaging in dialogue aimed at building capacity in the field with respect to current issues in learning analytics ranging from methods of gathering analytics, interpreting analytics with respect to learning issues, considering ethical issues, relaying the meaning of analytics to impact teaching and learning, etc.
- Develop a supportive, multidisciplinary community of learning analytics scholars
- Foster a spirit of collaborative research across countries, institutions and disciplinary background
- Enhance participating students’ conference experience by connecting participants to other LAK attendees
Our new paper – lead author Quan Nguyen – is now available online in Computers in Human Behavior. It examines the design of computer-based assessment and its impact on student engagement, student satisfaction and pass rates.
Computers in Human Behavior is locked behind a paywall, so contact me for a copy if you can’t get access to the paper.
Many researchers who study the impact of computer-based assessment (CBA) focus on the affordances or complexities of CBA approaches in comparison to traditional assessment methods. This study examines how CBA approaches were configured within and between modules, and the impact of assessment design on students’ engagement, satisfaction, and pass rates. The analysis was conducted using a combination of longitudinal visualisations, correlational analysis, and fixed-effect models on 74 undergraduate modules and their 72,377 students. Our findings indicate that educators designed very different assessment strategies, which significantly influenced student engagement as measured by time spent in the virtual learning environment (VLE). Weekly analyses indicated that assessment activities were balanced with other learning activities, which suggests that educators tended to aim for a consistent workload when designing assessment strategies. Since most of the assessments were computer-based, students spent more time on the VLE during assessment weeks. By controlling for heterogeneity within and between modules, learning design could explain up to 69% of the variability in students’ time spent on the VLE. Furthermore, assessment activities were significantly related to pass rates, but no clear relation with satisfaction was found. Our findings highlight the importance of CBA and learning design to how students learn online.
Nguyen, Quan; Rienties, Bart; Toetenel, Lisette; Ferguson, Rebecca and Whitelock, Denise (2017). Examining the designs of computer-based assessment and its impact on student engagement, satisfaction, and pass rates. Computers in Human Behavior (Early access).
I was invited by the Virtually Connecting team to present with them at OER17, and I, of course, jumped at the opportunity. I’m a VC advisory buddy and have done a few VC sessions at conferences, but the work Maha, Autumm, Rebecca and others put in to making it work is tiring just to observe. For those of you who don’t know VC, it started as a way for those not present at conferences to feel part of the experience. This is often realised through an hour-long session with a keynote or two after their talk, with someone onsite facilitating and a group of online people joining a Google Hangout (which is recorded and shared on YouTube). The session is very informal – definitely not an opportunity for the keynote to just give their talk again, but rather to discuss issues. In this sense it more resembles the corridor/bar/coffee chats at a conference. One thing the team have been very impressive at is varying the voices we get to hear: the Hangout only allows a limited number of guests, so they try to prioritise people who haven’t been in before, to get a diverse group and to allow everyone to feel welcome and able to contribute.
The team have conducted a number of focus groups (while I was sunning myself in Cape Town). Autumm has an excellent post on some of the paradoxes of inclusion these explore. Maha follows up, applying James Gee’s work on affinity spaces, which looks at how games go beyond the content itself to meta-spaces and communities. In relation to VC Maha comments:
There are the actual sessions which everyone can watch online or which people can even join and be part of the conversation. That’s the “thing” and it is valuable to many people. But there is also a meta thing that has more value for those who are part of it
This has certainly been my experience – I have been the guest on one, and the onsite facilitator for a few of them. This has influenced the physical experience at the conference also – I’ve made new connections with people I didn’t know who are at the same conference (indeed I’m on another panel at OER17 with Jim Luke as a direct result of the VC connection). And VC has expanded the people I communicate and share with online.
From the focus groups I took away three things of interest:
1) Safety – Just like the GO-GN students, some participants stated how the VC sessions feel safe or comfortable, where it’s ok to ask all sorts of questions, to share concerns. As the online environment gets increasingly brutal, this is clearly an aspect that people value.
2) Interdisciplinarity – Nadine makes a point about being included regardless of staff role, discipline, education level, etc. The role of discipline in inclusivity is one we don’t always consider. It’s often difficult to go to a conference for many reasons and one of these is that ‘it’s not really my area’, particularly when budgets are tight. In an era that seeks to promote interdisciplinarity that is potentially important.
3) Democracy – this is one of the paradoxes that Autumm talks about, and something the VC team anguish over. In some ways, by getting the keynotes to have sessions afterwards, it’s reinforcing a certain celebrity. But equally, these are often people that those who are remote want to talk to. Sherri makes the point that being ‘in the same room’ as an ed tech celebrity such as Tressie was a big deal for her, and being able to talk in a relaxed environment is liberating. But the team are also expanding beyond keynotes and getting a range of people in the sessions. VC is one example of how we can remove some of the formal barriers that traditional practice puts in place.
The team believe deeply in inclusivity; it is really the sole purpose for VC existing. But every inclusion can be seen as an exclusion. I think they get it right, though others may disagree, but I don’t know anyone who thinks about it as much as the VC team. As I mentioned in my OpenEd post, conferences could learn a lot from the approach and values of VC. The focus group videos are listed below; they represent part of an ongoing reflection about VC and its operation. Because, as Morris Zapp said in Small World, ‘every decoding is another encoding’, the job is never finished. Anyway, it’s a privilege to be involved with them, and a reminder that open practice still brings the good stuff.
With the rest of the OER Hub team I spent last week in Cape Town at the OE Global conference. Prior to every OE Global we run a two day seminar for around 10-15 GO-GN students. If you don’t know the GO-GN, it’s a Hewlett funded project, establishing a global community of PhD researchers in open education. During the two day seminar we bring together some of these to present about their work, share issues, talk about theories, debate methodologies, etc. Many of them then present at the main conference also. The whole motivation for setting up the network was to try and grow the OER research field, and to help many students who were often the only person in their host institution working in this field, which can be an isolating experience.
I think each year we have seen those aims realised to a greater extent, which demonstrates to me that the field is maturing. This year it was a real privilege to be with such an amazing group of researchers. Their research covers many areas of open education – OER usage by teachers, open education practice, critical theories of openness, MOOC learner experience, etc. There is also excellent global coverage. But what really impressed me this year was how the group bonded and used the opportunity to support each other, arranging a Slack channel, setting up ongoing discussions around theoretical frameworks, spending a morning in a quick hackfest, etc.
Beck recorded a lot of the participants talking about the impact GO-GN has for them; the video is below. Two things came out for me in this. The first was how several of them raise the idea that GO-GN is a safe place, where they feel comfortable exploring areas they’re not sure of in their work. A PhD is often a very exploratory process, and although it comes together in the end (usually) there are big basins of self-doubt on the path to that goal. The second was how many had found useful connections that really helped pull together their work, for example around someone else using a similar method. So, if you’re doing a PhD around OER or OEP (or know someone who is), then get in touch with us. Next year’s OE Global is in Delft, Netherlands. We have a lot of activity going on in between, including our monthly webinar series.
GO-GNers, I salute you!
I’m on the advisory board for a project led by Laura Czerniewicz in Cape Town and Neil Morris in Leeds, examining the concept of ‘unbundling’ in higher ed. I first came across unbundling back in 2000 with Evans and Wurster’s Blown to Bits book. It’s important to remember that at the time internet business was new, people didn’t know how it would turn out, and many were still saying it wouldn’t be a big thing. So anything that offered a reasonably intelligent analysis was seized upon. There was a lot that was useful in their book, setting out the idea that services that had previously been held together by the glue of physical location became unbundled when they went online, because that glue was insufficiently strong to keep them together. Their classic example was the car showroom, which had new and used car sales, servicing and financing all in one place. Online, these became separate services. This all made sense, and we saw new car sales move online, and finance was certainly affected. But car showrooms still persist…
Like its close cousin Disruption, unbundling has been a favourite philosophy of the Silicon Valley start-up. It has often been applied to education (even, erm, by me). This piece for example boldly states “The bundle of knowledge and certification that have long-defined higher education is coming apart”. The idea has some merit – if education moves online, do we need all the services: content production, examination, accreditation, support, etc to come from one provider? Maybe not, but higher education is not the same as car sales, no matter how much Richard Branson wants it to be. Selecting between those services is difficult, particularly for a learner who is a learner precisely because they don’t know what they don’t know. I know what I need to buy a car, even if I’m not a car expert. So having those elements in one bundle has a certain convenience. In short, the glue is stronger.
But the talk of unbundling is persistent and powerful. So I was pleased to be asked to be on the board of this project because it takes exactly the right approach in my view. It is asking good questions such as: what do people mean by unbundling? What are the drivers and motivation for it? What is the evidence of it in practice? What are the different models of unbundling? What are the impacts on learners, staff, society and business?
It is attempting to look for evidence for these in an unbiased and rational manner. The problem with concepts like unbundling is that they get peddled by people who have an interest in getting the idea established (because their business depends on it), and then dismissed as nonsense by those of us inside the system; the truth is probably somewhere in the middle. Research such as this can act as a “bullshit antidote”. One of the dangers is that the commentary Vice Chancellors and Principals get to hear is from the dynamic young software people with their unbundling start-up. Being able to point to solid research that says things like “unbundling isn’t really happening on the scale they suggest”, “unbundling works well for these learners, but has these impacts on staff”, “this model is viable, but has these costs”, or even “you can safely ignore it” – that is the sort of research we should be providing for a number of ed tech concepts, I feel. Luckily, as an advisory board member I don’t have to do any of the hard work, just turn up every six months and nod sagely.
Readers of this blog will know that I’ve often criticised the theory of disruption, and particularly its application in education. I won’t rehearse those arguments again, but it wasn’t until Trump and Brexit that I appreciated how much disruption had transcended its original form. Initially, when digital industry was new on the block, it provided a useful way of thinking about the potentially massive changes coming to many industries. And we can’t say that newspapers, the music industry, photography etc haven’t been completely altered by the arrival of digital technology (although Christensen’s disruption often falls down under close inspection, and better theories are available). But disruption, it turns out, is not about the digital. That was just its original form. It has now mutated beyond its original host and become an altogether different form of virus. This is also true of the Silicon Valley ideology it is so deeply rooted in. As Audrey Watters puts it:
“Silicon Valley ideology – “Move fast and break things.” Move fast and break democracy. Move fast and break families. Move fast and break the planet.”
The significant tenets of disruption are revealed in Trump. They are that existing knowledge is not only irrelevant, but a dangerous impediment. Disruption isn’t even about business it turns out. When Christensen says:
“By doing what they must do to keep their margins strong and their stock price healthy, every company paves the way for its own disruption”
What this actually came to mean (even if it wasn’t his intention) was that knowledge of any area itself is viewed as a reason not to trust someone. Core to disruption is the romantic notion of the outsider riding in on their white horse to save the sector from itself. And it is essential that this person be an outsider, only someone unencumbered by domain knowledge and all its established bias can truly see the opportunity for disruption. That pretty much describes Trump’s whole campaign, from “drain the swamp” to “lock her up”. Secondly, disruption pitches itself as complete revolution – a displacement of the incumbent by the new arrivals. Microsoft may have worked with IBM in the early days, but ultimately they replaced them. There is no collaboration, working alongside, improving here. At this point I cast an eye over Trump’s appointments.
It turns out disruption is a key element in the unenlightenment because it explicitly prioritises an absence of domain knowledge and seeks to undermine expertise. That’s a hell of a legacy Clayton.
I’ve been reading (well, listening to on Audible) Peter Biskind’s Easy Riders, Raging Bulls. It’s the account of New Hollywood, covering roughly 1969 to 1982, and plotting the rise and fall of the Hollywood auteurs such as Coppola, Scorsese, Altman, Friedkin and Bogdanovich. As is my wont, I’ve been drawing parallels with the education sector as I’ve been going through it. The tale is often portrayed as one of plucky outsiders with artistic vision challenging the studio system, but ultimately failing, with the money men then ruining cinema forever. Certainly when you consider the best films that arose from this period – The Godfather, Taxi Driver, The Exorcist, Jaws, Chinatown – they stand up better than the ‘high concept’ films of the 80s – Top Gun, Die Hard, Basic Instinct – which followed.
But this simplistic take discounts the awful films created in this period, which were the result of unchecked egos and a licence for self-indulgence (Hopper’s The Last Movie being a prime example). And what’s more, when you hear the inside story, these were often not the pure artists they are perceived as now – they were mostly nasty megalomaniacs, with as much greed as any studio exec, driven by drugs, sacking people at will, and the sexism – wow, the sexism (I also read Julia Phillips’ You’ll Never Eat Lunch in This Town Again, which is a blast and really underscores the rampant misogyny of many of the new directors). These were very indulged men who wanted to create their own powerhouses. The reaction against them was driven by a desire for revenge from the studios, who had lost money and been undermined by the new power of the director. The high concept movies of the 80s (they can be summarised in a sentence) were very much an accountant’s take on cinema. And while some of these are fun, they don’t approach the shambolic beauty of, say, Apocalypse Now. But there was a reaction against this reaction also. And while multiplexes are now full of comic book movies, there is also a decent independent cinema circuit, and a steady stream of intelligent, engaging movies that don’t require the director to think they’re a messiah to complete.
On to the parallels with education. I think we have a similar tendency to over-romanticise the academic culture of the 1970s. This was a time when universities were not subject to the managerial approach that dominates now. But as with New Hollywood, this lack of accountability was not always a good thing for students. It also easily gave rise to a clique – people getting jobs for their pals was not uncommon. The problem was that this led, under New Labour particularly, to a desire to control those academics in the same way the studios wanted to control directors. Some of this has not been bad – the focus on helping students gain employment, improving student success, and opening up education to those beyond the usual elite have all been results of increased administrative and managerial approaches in higher ed. The “do what you want, the best will survive” approach to education that often persisted in the 70s ends up benefitting those in relatively privileged positions.
But as with the studios’ revenge, there is a downside to all of this. The increasing customerisation and fear of litigation or public failure has seen a move away from experimental pedagogy towards safer options. This isn’t always to the benefit of students, who don’t get to use that university time to really experience new ways of learning, and ultimately new aspects of themselves. The environment for academics has become increasingly pressured and controlled – and at the same time they are criticised for being insufficiently innovative. I’m tempted to see MOOCs as the high concept equivalent in education – the Days of Thunder interpretation of the university experience.
So, the question then is how we get the balance right in allowing sufficient freedom, while still developing an environment that doesn’t allow (male) egos to run unchecked with scant regard for others. It has to be based on mutual respect between these two arms of the university – administration and academia. Too often people in both camps speak disparagingly of the other: academics ‘don’t live in the real world’; administration ‘just wants to control everything’. There is a bit of truth in both of these, I’ll admit, but we’ll need to get that balance right to avoid our students sitting through years of Last Action Hero or At Long Last Love, when they could be watching Moonlight or Captain Fantastic.
Last week I had two encounters with types of document that can be a little, let’s say, dry. The first was writing learning outcomes, for a new OU course and for a MOOC on the bizMOOC project. The second was the launch of the ALT strategy document. It struck me that there was a similarity between these two types of document: both are potentially useful, but both often become mired in a particular vocabulary of their own that renders them largely meaningless to their intended audience.
I don’t think I quite succeeded in breaking through this with the learning outcomes in question, but I do feel that Maren Deepwell and the team at ALT managed it with the strategy. It’s been an interesting process, and what has resulted is, I feel, a meaningful and engaging document. So, for future reference as much as anything, I’m recording what was important about the process.
Firstly, Maren took it seriously. This wasn’t something ALT were doing just because you have to have a strategy document, the kind you then put in a drawer and never look at again. For ALT this was seen as an opportunity both to produce a strategy that would guide the organisation and to engage the community. Which brings me to the second feature: conducting the process in an open, collaborative manner. We held webinars, a face-to-face session and an open survey, and invited comments all the way along. This was not a top-down, management-consultant-derived strategy, but a bottom-up, community-driven one.
Lastly, it represented an opportunity to rethink, or at least tinker with, what such a document should be. We deliberately kept it short and wrote it in accessible language. But Maren also had the great idea of inviting along the hugely talented Bryan Mathers for a ‘visual thinkery‘ session. During this he got the trustees to talk about ALT and the purpose of the strategy, and from this he produced some lovely images. These are, of course, CC licensed. It gives a whole different feel and life to the strategy document, I think.
I’m not sure I could apply the same process to learning outcomes, but I feel there is something generalisable from Maren’s approach to this that could work there too.
The Innovating Pedagogy 2016 report. Now in Chinese.
Just back from a couple of trips to Luxembourg, where I was one of the team carrying out final reviews for the Lea’s Box and Eco projects. This was my third year reviewing Lea’s Box, but I only joined the Eco team for their final review.
Lea’s Box was ‘a 3-year research and development project (running from March 2014 to [January] 2016) funded by the European Commission. The project focussed on (a) making educational assessment and appraisal more goal-oriented, proactive, and beneficial for students, and (b) on enabling formative support of teachers and other educational stakeholders on a solid basis of a wide range of information about learners.’
Eco was ‘a European project based on Open Educational Resources (OER) that gives free access to a list of MOOC (Massive Open Online Courses) in 6 languages […] The main goal of this project is to broaden access to education and to improve the quality and cost-effectiveness of teaching and learning in Europe.’
After talking about learning analytics at the BETT show, I was invited to write about them for the Public Service Executive magazine. The hard copy of PSE goes out to 9,000 subscribers, while the online version goes out to a database of 50,000.
This article provides a short introduction to learning analytics for people considering introducing analytics at their institution. It introduces six areas for action, and briefly outlines what needs to be done in each of these:
Areas for action
- Leadership and governance
- Collaboration and networking
- Teaching and learning
- Quality assurance
- Capacity building
New paper out in the British Journal of Educational Technology, co-authored with a host of people: lead author Liz FitzGerald, plus Natalia Kucirkova, Ann Jones, Simon Cross, Thea Herodotou, Garron Hillaire and Eileen Scanlon.
The framework proposed in the paper has six dimensions:
- what is being personalised
- type of learning
- personal characteristics of the learner
- who/what is doing the personalisation
- how personalisation is carried out
- impact / beneficiaries
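To make the six dimensions concrete, here is a minimal sketch of how a TEL intervention might be described against the framework. The field names and the adaptive-reading example are my own illustration, not taken from the paper itself:

```python
from dataclasses import dataclass

@dataclass
class PersonalisationProfile:
    """One TEL intervention described along the paper's six dimensions.

    Field names are an illustrative paraphrase of the dimensions, not
    the paper's own vocabulary.
    """
    what_is_personalised: str       # e.g. content, pace, interface
    type_of_learning: str           # e.g. formal, informal
    learner_characteristics: str    # e.g. prior knowledge, preferences
    personalised_by: str            # who/what does it: learner, teacher, algorithm
    how: str                        # the mechanism used
    impact: str                     # impact / beneficiaries

# A hypothetical adaptive-reading app, analysed dimension by dimension
example = PersonalisationProfile(
    what_is_personalised="reading content",
    type_of_learning="informal",
    learner_characteristics="reading level",
    personalised_by="algorithm",
    how="adjusts text difficulty after each passage",
    impact="individual learner",
)
```

Filling in a record like this for each case study is one way the framework could be used to compare existing implementations or to spot which dimensions a proposed design has left unspecified.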
Personalisation of learning is a recurring trend in our society, referred to in government speeches, popular media, conference and research papers, and technological innovations. This latter aspect – using personalisation in technology-enhanced learning (TEL) – has promised much but has not always lived up to the claims made. Personalisation is often perceived to be a positive phenomenon, but it is difficult to know how to implement it effectively within educational technology.
In order to address this problem, we propose a framework for the analysis and creation of personalised TEL. This article outlines and explains this framework with examples from a series of case studies. The framework serves as a valuable resource in order to change or consolidate existing practice and suggests design guidelines for effective implementations of future personalised TEL.
FitzGerald, Elizabeth; Kucirkova, Natalia; Jones, Ann; Cross, Simon; Ferguson, Rebecca; Herodotou, Christothea; Hillaire, Garron and Scanlon, Eileen (2017). Dimensions of personalisation in technology-enhanced learning: a framework and implications for design. British Journal of Educational Technology (early view).