
IET researcher and project officer blogs

Textbook Heroes at CNX2014

Dr Beck Pitt's blog - Tue, 15/04/2014 - 14:22


“Textbook Hero” (Photo Credit: Beck Pitt)

Two weeks ago I was participating in CNX2014 at Rice University, Houston. Although I was mainly tweeting from the conference, I did take a few notes which I’ve reproduced below. These notes focus only on Monday’s sessions so, for a more detailed and varied account of what happened, check out the conference hashtag (#cnx2014) for lots of interesting participant reflections.

Connexions/OpenStax College Updates 

Connexions (CNX) is now 15 years old! And whilst the conference did look back at the conception of CNX and some of its achievements over the past decade or so, there were exciting announcements to come … First, there was the update from Daniel Williamson on last year’s OpenStax College (OSC) statistics on adoption and cost savings (see the great video from 2013 here, which was showcased at that event; Daniel had produced a cool updated version for this year.) OSC now has…

View original 1,509 more words


Risky Business

Will Woods's blog - Mon, 07/04/2014 - 09:39

In March I attended a visioning workshop held by the recently appointed Pro-Vice-Chancellor of Learning and Teaching, Prof. Belinda Tynan, and attended by 60 of my colleagues. The 60 were recruited through a competition for ideas, and the best ideas won the day, so the event brought together people from all levels and areas of the Open University, which was a refreshing way to bring bright minds together. The workshop discussed where the Open University should be by 2025. The approach we took was designed by a group who work on Future Studies: we started at the global level and gradually worked down to our own turf, losing the baggage of the here and now along the way, and found ourselves forming a consensus by engaging in cross-fertilized discussions on topics to do with educational futures.

It’s fair to say that I found the workshop empowering and inspiring; it had everything from contemporary performance art to RSA-style animation. I’m currently working on the area of “Innovation to Impact”, which is very close to my heart and something I’ve been working to strengthen within the Open University over the past few years, alongside Prof. Josie Taylor, the previous Director of IET, who has recently retired, and David Matthewman, the Chief Information Officer at the Open University.

Another supporter of this work has been the Director of Learning and Teaching, Niall Sclater, who has recently left the Open University to pursue new ventures. I raise my cap to Niall for the work he has done in the relatively short time he’s been at the Open University, including the introduction of the Moodle VLE (along with Ross MacKenzie) and the Roadmap Acceleration Programme, and most recently leading the Tuition Strategy work for the OU. I wish him all the best on his latest adventure! – I’m starting to feel like the last man standing in the TEL area.

Coming back to innovation, Ann Kirschner wrote a piece about Innovation in Higher Education a couple of years ago, and many similar articles have followed since; however, I still enjoy reading her article because it appears well researched and remains a good compass for where innovations are heading. Tony Bates also covered these areas recently in a blog post around a Vision for Learning and Teaching in 2020. We covered many of these and other aspects at the workshop, but sticking to the topic of innovation and risk, the main thing that rang true for me was that we have become very “risk averse” (complacent) at the Open University. There was, among the 60 delegates, a very strong sense that we needed to feel able to take some risks and to be more agile (a very overused word) to survive and thrive by 2025.

The “innovation pipeline” is a concept we’ve been considering: how to improve the flow between incubators and central areas, i.e. the journey from prototype to large-scale mainstreaming. We want to improve this at the Open University, and last year I gave a short presentation to the Learning Systems Advisory Group on that topic. I love the quote that I took from Ron Tolido of Capgemini: “@rtolido At Amazon, you must write a business case to stop an innovation proposal, rather than to start one. Silences 90% of nay-sayers”. The Open University is no Amazon, of course; however, we do need some of that pioneering spirit…

 

…in the past week I have also attended an “executive away day” for the Institute of Educational Technology at the OU, organised by the new Director of IET, Patrick McAndrew. Patrick has always been a keen early adopter of technologies and new ideas, and he wants to make some organisational transformations with IET showing the way. For example, at the away day we went through a micro version of an agile project: we had a scrum, a sprint, another scrum and a velocity check, all within one hour in the afternoon. The project was to develop an induction for new starters, and we all took on tasks and worked through them, helping each other out. We have now taken the first step towards becoming an agile unit.

I have been using an agile approach to some recent developments, in particular for iSpot, where I was hoping to start using an agile or lean approach back in 2012 (see my magile post) but only actually achieved any form of agile methodology last year, when we started running into trouble and found that we needed to resolve issues on a much tighter timeframe. We resorted to frequent scrums (not daily, but every other day) and short sprints of three weeks. This worked very well, and we were transparent with the project team, which kept things ticking over; very quickly (within nine weeks) we turned the project around and got it back on track.

I believe that Patrick wants IET to be a leading light for the Open University to become an agile organisation. I fully support him in this and I will be doing my utmost to ensure that we embrace this and to prove that adopting an agile approach does not compromise on the quality of output.

There will be more from me on the L&T vision workshop outputs once they are officially synthesised, endorsed and made available in the public domain.


JiME Reviews April 2014

openmind.ed - Dr Rob Farrow's blog - Tue, 01/04/2014 - 11:12

This is the current list of books for review in the Journal of Interactive Media in Education (JiME). If you’re interested in reviewing any of the following then get in touch with me through Twitter or via rob.farrow [at] open.ac.uk, letting me know which volume you are interested in and something of your reviewer credentials.

Sue Crowley (ed.) (2014). Challenging Professional Learning. Routledge: London and New York.

Andrew S. Gibbons (2014). An Architectural Approach to Instructional Design. Routledge: London and New York.

Wanda Hurren & Erika Hasebe-Ludt (eds.) (2014). Contemplating Curriculum – Genealogies, Times, Places. Routledge: London and New York.

Phyllis Jones (ed.) (2014). Bringing Insider Perspectives into Inclusive Learner Teaching – Potentials and challenges for educational professionals. Routledge: London and New York.

Marilyn Leask & Norbert Pachler (eds.) (2014). Learning to Teach Using ICT in the Secondary School – A companion to school experience. Routledge: London and New York.

Ka Ho Mok & Kar Ming Yu (eds.) (2014). Internationalization of Higher Education in East Asia – Trends of student mobility and impact on education governance. Routledge: London and New York.

Peter Newby (2014). Research Methods for Education (2nd ed.). Routledge: Abingdon.

OpenStax College Survey Results (Part I)

Dr Beck Pitt's blog - Mon, 31/03/2014 - 01:20


CNX2013 at Rice University (Picture Credit: Beck Pitt CC-BY)

Exciting times! I’m currently in Houston and about to head on over to Rice University for CNX2014 tomorrow. I was lucky enough to attend last year’s conference and am returning this year to present some of our research findings on OpenStax College (OSC) textbooks as part of 1 April’s Rapid Fire panel session Efficacy: Are they Learning? with Denise Domizi (UGA) and John Hilton III (BYU). For more info on the conference check out the schedule here.

Over the past year or so I’ve been working with Daniel Williamson of OSC/Connexions to conduct research into the impact of OSC textbooks on both educators and students. To date, we’ve run questionnaires with both user groups and I’ve also interviewed a small number of educators about their use of the textbooks. Work is ongoing and I’m currently focused on creating case studies and looking for…

View original 2,353 more words


Learning analytics don’t just measure students’ progress

My article ‘Learning analytics don’t just measure students’ progress – they can shape it’ appeared online in The Guardian education today, in the ‘extreme learning’ section.

In it, I argue that we should not apply learning analytics to the things we can measure easily, but to those that we value, including the development of crucial skills such as reflection, collaboration, linking ideas and writing clearly.

I also link to the #laceproject – Learning Analytics Community Exchange – a European-funded project on learning analytics.


Setting learning analytics in context

I organised a panel at Learning Analytics and Knowledge 2014 (LAK14) in Indianapolis on ‘Setting learning analytics in context: overcoming the barriers to large-scale adoption’.

Thanks to Shirley Alexander, Shane Dawson, Leah Macfadyen and Doug Clow for making it a great event, and commiserations to Alfred Essa who couldn’t make it at the last minute due to a cancelled flight.

Abstract

Once learning analytics have been successfully developed and tested, the next step is to implement them at a larger scale – across a faculty, an institution or an educational system. This introduces a new set of challenges, because education is a stable system, resistant to change. Implementing learning analytics at scale involves working with the entire technological complex that exists around technology-enhanced learning (TEL). This includes the different groups of people involved – learners, educators, administrators and support staff – the practices of those groups, their understandings of how teaching and learning take place, the technologies they use and the specific environments within which they operate. Each element of the TEL Complex requires explicit and careful consideration during the process of implementation, in order to avoid failure and maximise the chances of success. In order for learning analytics to be implemented successfully at scale, it is crucial to provide not only the analytics and their associated tools but also appropriate forms of support, training and community building.

The Slideshare below includes my sections of the panel presentation, but not the excellent presentations from the other speakers.

Setting learning analytics in context from Rebecca Ferguson

Discourse-centric learning analytics: DCLA14

I was one of the chairs of the Second International Workshop on Discourse-centric Learning Analytics (DCLA14), which ran as part of the Learning Analytics and Knowledge 2014 (LAK14) conference in Indianapolis.

Workshop notes available on Google Docs.

Programme overview

9.00 Chairs’ Welcome & Participant Lightning Intros

9.30 DCLA14: some questions to ponder… Rebecca Ferguson

9.45 DCLA Meet CIDA: Collective Intelligence Deliberation Analytics [pdf]

Simon Buckingham Shum, Anna De Liddo and Mark Klein

10.45 Automated Linguistic Analysis as a Lens for Analysis of Group Learning [pdf]

Carolyn Penstein Rosé

11.30 Designing and Testing Visual Representations of Draft Essays for Higher Education Students [pdf]

Denise Whitelock, Debora Field, John T. E. Richardson, Nicolas Van Labeke and Stephen Pulman

Discourse-centric learning analytics from Rebecca Ferguson

Future Internet Assembly, Athens

I was invited to attend the Future Internet Assembly in Athens, where I took part in a panel discussion: ‘Beyond MOOCs: The Future of Learning on the Future Internet‘. The FIA website includes video footage of the entire panel.

I spoke about my experience with the FutureLearn platform for massive open online courses (MOOCs). Since running its first course in September, FutureLearn has gained more than a quarter of a million registered users and over half a million course sign-ups.

I talked about the benefits that massive participation can offer to learners, educators and society, and about some of the implications of MOOCs for the future of the internet, with a particular focus on authentication, interoperability and accessibility.

MOOC Platforms and the Future Internet from Rebecca Ferguson

Thinking Learning Analytics

openmind.ed - Dr Rob Farrow's blog - Wed, 19/03/2014 - 15:54

I’m back in the Ambient Labs again, this time for a workshop on learning analytics for staff here at The Open University.

Challenges for Learning Analytics: Visualisation for Feedback

Denise Whitelock described the SaFeSEA project, which is based around trying to give students meaningful feedback on their activities.  SaFeSEA was a response to high student dropout rates: around 33% of new OU students do not submit their first TMA (tutor-marked assignment).  Feedback on submitted writing prompts ‘advice for action’: a self-reflective discourse with a computer.  Visualizations of these interactions can open a discourse between tutor and student.

Students can worry a lot about the feedback they receive.  Computers can offer non-judgmental, objective feedback without any extra tuition costs.  OpenEssayist analyses the structure of an essay, identifies key words and phrases, and picks out key sentences (i.e. those that are most representative of the overall content of the piece).  This analysis can be used to generate visual feedback, some forms of which are more easily understood than others.
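
To give a flavour of what ‘picking out key sentences’ involves, here is a minimal Python sketch of an extractive approach: score each sentence by how much of the essay’s most frequent vocabulary it uses. This is a toy heuristic of my own for illustration, not OpenEssayist’s actual algorithm, and the function name and scoring are assumptions.

```python
import re
from collections import Counter

def key_sentences(essay, top_n=3):
    """Toy extractive summariser: rank sentences by how representative
    their words are of the essay as a whole (term-frequency overlap)."""
    # Naive sentence and word segmentation.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', essay) if s.strip()]
    freq = Counter(re.findall(r'[a-z]+', essay.lower()))  # document-level term frequencies

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        if not tokens:
            return 0.0
        # Sentences built from the essay's most common vocabulary score highest.
        return sum(freq[t] for t in tokens) / len(tokens)

    return sorted(sentences, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    draft = ("Learning analytics can support feedback. Feedback helps students "
             "reflect on their writing. The weather was nice on campus today. "
             "Visual feedback can open a discourse between tutor and student.")
    for s in key_sentences(draft, top_n=2):
        print(s)
```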

Bertin (1977/81) provides a model for the visualization of data.  Methods can include diagrams which show how well connected different passages are to the whole, or which generate different patterns that highlight different types of essay. These can be integrated with social network analysis and discourse analytics.

Can students understand this kind of feedback? Might they need special training?  Are these tools that could be used primarily by educators?  Would they also need special training?  In both cases, it’s not entirely clear what kind of training this might be (information literacy?).  Can one generic tool be used to support writing across all disciplines, or are discipline-specific tools needed?

The Wrangler’s relationship with the Science Faculty

Doug Clow then presented on ‘data wrangling’ in the science faculty at The Open University.  IET collects information on student performance and presents this back to faculties in a ‘wrangler report’, which can feed into future course delivery and learning design.

What can faculty do with these reports?  Data is arguably better at highlighting problems or potential problems than it is at solving them.  This process can perhaps get better at identifying key data points or performance indicators, but faculty still need to decide how to act based on this information.  If we move towards the provision of more specific guidance, then the role of faculty could arguably be diminished over time.

The relation between learning analytics and learning design in IET work with the faculties

Robin Goodfellow picked up these themes from a module team perspective.  Data can be understood as a way of closing the loop on learning design, creating a virtuous circle between the two.  In practice, there can be significant time delays in processing the data in time for it to feed in.  But the information can still be useful to module teams when thinking about aspects of a course such as:

  • Communication
  • Experience
  • Assessment
  • Information Management
  • Productivity
  • Learning Experience

This can give rise to quite specific expectations about the balance of different activities and learning outcomes.  Different indicators can be identified and combined to standardize metrics for student engagement, communication, etc.
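
As a rough illustration of how indicators might be combined into a standardised metric, one common approach is to z-score each indicator across a cohort and average the results. The indicator names, the equal weighting and the z-score approach below are my own assumptions for the sketch, not the OU’s actual method.

```python
from statistics import mean, pstdev

def zscores(values):
    """Standardise a list of raw indicator values across a cohort."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

def composite_engagement(indicators):
    """indicators: dict of indicator name -> list of per-student raw values.
    Returns one standardised 'engagement' score per student (equal weights)."""
    standardised = [zscores(vals) for vals in indicators.values()]
    return [mean(per_student) for per_student in zip(*standardised)]

# Hypothetical raw indicators for four students.
indicators = {
    "vle_logins_per_week":   [2, 5, 9, 1],
    "forum_posts":           [0, 3, 7, 1],
    "assignments_submitted": [1, 2, 2, 0],
}
print(composite_engagement(indicators))
```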

In this way, a normative notion of what a module should be can be said to be emerging.  (This is perhaps a good thing in terms of supporting course designers but may have worrying implications in terms of promoting homogeneity.)

Another selective element arises from the fact that it’s usually only possible to collect data from a selection of indicators:  this means that we might come to place too much emphasis on data we do have instead of thinking about the significance of data that has not been collected.

The key questions:

  • Can underlying learning design models be identified in data?
  • If so, what do these patterns correlate with?
  • How can all this be bundled up for faculty as something useful?
  • Are there implications for general elements of course delivery (e.g. forums, VLE, assessment)?
  • If we only permit certain kinds of data for consideration, does this lead to a kind of psychological shift where these are the only things considered to be ‘real’ or of value?
  • Is there a special kind of interpretative skill that we need in order to make sense of learning analytics?

Learning Design at the OU

Annie Bryan drilled a little deeper into how learning design fits into this picture.   Learning design is now a required element of course design at The Open University.  There are a number of justifications given for this:

  • Quality enhancement
  • Informed decision making
  • Sharing good practice
  • Improving cost-effectiveness
  • Speeding up decision making
  • Improving online pedagogy
  • Explicitly representing pedagogical activity
  • Effective management of student workload

A number of (beta) tools for Learning Design have been produced.  These are focused on module information, learning outcomes, activity planning, and mapping modules and resources.  They are intended to support constructive engagement over the life of the course.  Future developments will also embrace a qualification-level perspective, mapping activities against qualification routes.

These tools are intended to help course teams think critically about, and discuss, the purpose of the tools and resources chosen in the context of the course as a whole and of student learning experiences.  A design perspective can also help to identify imbalances in course structure or problematic parts of a course.

Open Research: OER Research Hub Course Launches June 2014!

Dr Beck Pitt's blog - Wed, 19/03/2014 - 10:34


Picture Source: The University of Utah: http://epubs.utah.edu/index.php/open

  • What is open research?
  • In what ways can research be open?
  • What do we mean by open in this context?
  • When is it appropriate for research to be open?
  • What does researching openly involve and what makes open research possible?
  • What are the benefits of open research?
  • What kinds of challenges might a researcher face when researching in the open?

Whilst researching in the open can be an enabler (blogging your research findings can potentially reach a wider audience, more quickly, than via the traditional paper publishing route; see for example one of Martin Weller’s “open scholarship example[s]“), it also has potential challenges (particularly ethical issues) that researchers might face or need to consider.

We’ll be exploring some of these issues and the questions above during a four week facilitated course on open research that we’re planning for June 2014. The focus of the…

View original 251 more words


Guerrilla Research #elesig

openmind.ed - Dr Rob Farrow's blog - Mon, 17/03/2014 - 15:41

We don't need no stinking permissions....

Today I’m in the research laboratories in the Jennie Lee Building at The Institute of Educational Technology (aka work) for the ELESIG Guerrilla Research Event.  Martin Weller began the session with an outline of the kind of work that goes into preparing unsuccessful research proposals.  Using figures from the UK research councils, he estimates that the ESRC alone attracts unfunded bids equivalent to 65 years of work every year (2,000 failed bids × 12 days per bid ≈ 24,000 days, or roughly 65 years).  This work is not made public in any way and can be considered lost.

He then went on to discuss some different digital scholarship initiatives – a meta educational technology journal based on aggregation of open articles; MOOC research by Katy Jordan; an app built at the OU; DS106 Digital Storytelling – all of which have elements of what is being termed ‘guerrilla research’.  These elements include:

  • No permissions (open access, open licensing, open data)
  • Quick set up
  • No business case required
  • Allows for interdisciplinarity unconstrained by tradition
  • Using free tools
  • Building open scholarship identity
  • Kickstarter / enterprise funding

Such initiatives can lead to more traditional forms of funding and publication, and the two certainly co-exist.  But these kinds of activities are not always institutionally recognised, giving rise to a number of issues:

  • Intellectual property – will someone steal my work?
  • Can I get institutional recognition?
  • Do I need technical skills?
  • What is the right balance between traditional and digital scholarship?
  • Ethical concerns about the use of open data – can consent be assumed?  Even when dealing with personal or intimate information?

Tony Hirst then took the floor to speak about his understanding of ‘guerrilla research’.  He divided his talk into the means, opportunity and motive for this kind of work.

First he spoke about the use of the commentpress WordPress theme to disaggregate the Digital Britain report so that people could comment online.  The idea came out of a tweet but within 3 months was being funded by the Cabinet Office.

In 2009 Tony produced a map of MP expense claims which was used by The Guardian.  This was produced quickly using open technologies and led to further maps and other ways of exploring data stories.  Google Ngrams is a tool that was used to check for anachronistic use of language in Downton Abbey.

In addition to pulling together recipes using open tools and open data, another approach is to use innovative coding schemes. Mat Morrison (@mediaczar) used this to produce an accession plot of the London riots.  Tony has reused this approach – so another way of approaching ‘guerrilla research’ is to re-appropriate existing tools.

Another approach is to use data to drive a macroscopic understanding of data patterns, producing maps or other visualizations from very large data sets to help sensemaking and interpretation.  One important consideration here is ‘glanceability‘ – whether the information has been filtered and presented so that the most important data are highlighted and the visual representation conveys meaning successfully to the viewer.

Data.gov.uk is a good source of data: the UK government publishes large amounts of information under open licences.  Access to data sets like this can save a lot of research money, and combining different data sets can provide unexpected results.  Publishing data sets openly supports this method and also allows others to look for patterns that the original researchers might have missed.

Google supports custom searches which can concentrate on results from a specific domain (or domains) and this can support more targeted searches for data.  Freedom of information requests can also be a good source of data; publicly funded bodies like universities, hospitals and local government all make data available in this way (though there will be exceptions). FOI requests can be made through whatdotheyknow.com.  Google spreadsheets support quick tools for exploring data such as sliding filters and graphs.

OpenRefine is another tool which Tony has found useful.  It can cluster open-text responses in data sets algorithmically, reducing the need for manual coding.  The tool can also be used to compare values with linked data on the web.
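
For readers who have not used OpenRefine, the sketch below shows the general idea behind key-collision (‘fingerprint’) clustering of free-text values. It is a simplified approximation of that style of clustering written for illustration, not OpenRefine’s own code.

```python
import re
import unicodedata
from collections import defaultdict

def fingerprint(value):
    """Normalise a free-text value into a key: strip accents and punctuation,
    lowercase, then de-duplicate and sort the tokens."""
    value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode()
    tokens = re.findall(r"[a-z0-9]+", value.lower())
    return " ".join(sorted(set(tokens)))

def cluster(values):
    """Group values whose fingerprints collide - a rough stand-in for manual coding."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [g for g in groups.values() if len(g) > 1]

responses = ["Open University", "open university.", "The  Open University",
             "Universite ouverte", "OPEN UNIVERSITY"]
print(cluster(responses))  # the three 'Open University' variants collide
```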

Tony concluded his presentation with a comparison of ‘guerrilla research’ and ‘recreational research’. Research can be more creative and playful and approaching it in this way can lead to experimental and exploratory forms of research.  However, assessing the impact of this kind of work might be problematic.  Furthermore, going through the process of trying to get funding for research like this can impede the playfulness of the endeavour.

A possible workflow for getting started with this kind of work (a minimal code sketch of the pipeline follows the list):

  • Download openly available data: use open data, hashtags, domain searches, RSS
  • DBpedia can be used to extract information from Wikipedia
  • Clean data using OpenRefine
  • Upload to Google Fusion Tables
  • From here data can be mapped, filtered and graphed
  • Use Gephi for data visualization and creating interactive widgets
  • StackOverflow can help with coding/programming

(I have a fuller list of data visualization tools on the Resources page of OER Impact Map.)
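
To make the workflow above concrete, here is a deliberately simplified Python version of the first few steps: fetch an openly published CSV, clean it, and aggregate it into something ‘glanceable’ that could then be graphed or mapped. The URL and column names are placeholders of my own, not a real data set.

```python
# Minimal 'guerrilla research' pipeline: fetch open data, clean it, summarise it.
# The URL and column names are placeholders for whatever open CSV you find
# (e.g. via data.gov.uk); swap in a real data set before running.
import io
import requests
import pandas as pd

DATA_URL = "https://example.org/open-data/spending.csv"  # placeholder

def fetch(url):
    """Download an openly published CSV into a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.read_csv(io.StringIO(response.text))

def clean(df):
    """Light cleaning: tidy column names, drop empty rows, parse amounts."""
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(how="all")
    if "amount" in df.columns:
        df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    return df

def summarise(df, group_col="department", value_col="amount"):
    """Aggregate to something glanceable that could be charted or mapped."""
    return df.groupby(group_col)[value_col].sum().sort_values(ascending=False)

if __name__ == "__main__":
    summary = summarise(clean(fetch(DATA_URL)))
    print(summary.head(10))
```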

Rethinking Education

I was recently invited to Stockholm, to speak at the ‘Rethinking Education‘ conference run by the Ratio Institute. The conference objective was ‘to focus on the need to design for the future education and skills systems that enable young people and adults to develop the knowledge and skills needed in the labour market, as well as for personal development and important societal goals.’

My focus was on the benefits and challenges offered by MOOCs, with particular reference to FutureLearn.

Rethinking education from Rebecca Ferguson

OERs about accessibility

Dr Nick Freear's blog - Fri, 14/03/2014 - 14:45

In response to a recent webinar on Accessibility & OER, I thought I'd put together a list of some OER-like resources that I know of about accessibility. Please feel free to tell me about resources that you've found useful!

Web accessibility basics
Dive Into Accessibility
Web Accessibility
  • https://accessibility.makes.org/thimble/web-accessibility-
  • Publisher: Mozilla (Webmaker)
  • License: Is it OER-like? https://webmaker.org/en-US/terms ("... you hereby grant every user of the Webmaker a non-exclusive, worldwide, sublicensable royalty free license to use Your Submissions in connection the functionality available through Webmaker.") ("In the future, we may provide you with options to add Creative Commons or other …")
  • Notes: Webmaker is a fairly new platform from Mozilla.
Accessibility of eLearning

DISCLAIMER: I've probably missed some key resources - please contribute! Via: @nfreear on Twitter.

School of Open: Research Findings (Part II)

Dr Beck Pitt's blog - Fri, 14/03/2014 - 12:00


School of Open Logo (Source: https://p2pu.org/en/#schools)

Earlier in the week I reported on the preliminary Autumn 2013 pre-course questionnaire results of the work we’ve been doing with School of Open (SOO) over the past year. In this blog post, I’m focusing on the preliminary results of the post-course questionnaire.  We received a total of 22 responses from participants in two out of the four courses that we surveyed: Copyright 4 Educators (AUS) and Creative Commons (CC) for K-12 Educators. Unfortunately, we did not receive any survey responses from those who had participated in the Writing Wikipedia Articles: The Basics and Copyright for Educators (US) courses. There could be many reasons why this was the case (e.g. the number of active students in the course at this stage, student awareness of the post-survey, etc.). However, as with the pre-course survey, the post-course questionnaire was optional and we were very much dependent on individual participants deciding/being able to take time out to participate in our research.

Consequently, and…

View original 1,604 more words


Ethical Use of New Technology in Education

openmind.ed - Dr Rob Farrow's blog - Tue, 11/03/2014 - 16:18

Today Beck Pitt and I travelled up to Birmingham in the Midlands of the UK to attend a BERA/Wiley workshop on technologies and ethics in educational research.  I’m mainly here to focus on the redraft of the Ethics Manual for OER Research Hub and to give some time over to thinking about the ethical challenges that can be raised by openness.  The first draft of the ethics manual was primarily intended to guide us at the start of the project, but now we need to redraft it to reflect some of the issues we have encountered in practice.

Things kicked off with an outline of what BERA does and the suggestion that consciousness about new technologies in education often doesn’t filter down to practitioners.  The rationale behind the seminar seems to be to raise awareness in light of the fact that these issues are especially prevalent at the moment.

This blog post may be in direct contravention of the Chatham convention

We were first told that these meetings would be held under the ‘Chatham House Rule’, which means that participants are free to use the information received but without identifying speakers or their affiliation… this takes us straight into the meat of some of the issues provoked by openness: I’m in the middle of live-blogging this as the suggestion is made.  (The session is being filmed but apparently they will edit out anything ‘contentious’.)

Anyway, on to the first speaker:

Jill Jameson, Prof. of Education and Co-Chair of the University of Greenwich
‘Ethical Leadership of Educational Technologies Research: Primum non nocere’

The Latin part of the title of this presentation means ‘first, do no harm’ and is a recognised ethical principle that goes back to antiquity.  Jameson wants to suggest that this is a sound principle for ethical leadership in educational technology.

After outlining a case from medical care Jameson identified a number of features of good practice for involving patients in their own therapy and feeding the whole process back into training and pedagogy.

  • No harm
  • Informed consent
  • Data-informed consultation on treatment
  • Anonymity, confidentiality
  • Sensitivity re: privacy
  • No coercion
  • ‘Worthwhileness’
  • Research-linked: treatment & PG teaching

This was contrasted with a problematic case from the NHS concerning the public release of patient data.  Arguably very few people have given informed consent to this procedure.  But at the same time the potential benefits of aggregating data are being impeded by concerns about sharing of identifiable information and the commercial use of such information.

In educational technology the prevalence of ‘big data’ has raised new possibilities in the field of learning analytics.  This raises the possibility of data-driven decision making and evidence-based practice.  It may also lead to more homogenous forms of data collection as we seek to aggregate data sets over time.

The global expansion of web-enabled data presents many opportunities for innovation in educational technology research.  But there are also concerns and threats:

  • Privacy vs surveillance
  • Commercialisation of research data
  • Techno-centrism
  • Limits of big data
  • Learning analytics acts as a push against anonymity in education
  • Predictive modelling could become deterministic
  • Transparency of performance replaces ‘learning’
  • Audit culture
  • Learning analytics as models, not reality
  • Datasets are not the same as information; they stand in need of analysis and interpretation

Simon Buckingham Shum has put this in terms of a utopian/dystopian vision of big data.

Leadership is thus needed in ethical research regarding the use of new technologies to develop and refine urgently needed digital research ethics principles and codes of practice.  Students entrust institutions with their data and institutions need to act as caretakers.

I made the point that the principle of ‘do no harm’ is fundamentally incompatible with any leap into the unknown as far as practices are concerned.  Any consistent application of the principle leads to a risk-averse application of the precautionary principle with respect to innovation.  How can this be made compatible with experimental work on learning analytics and the sharing of personal data?  Must we reconfigure the principle of ‘do no harm’ so it becomes ‘minimise harm’?  It seems that way from this presentation… but it is worth noting that this is significantly different to the original maxim with which we were presented… different enough to undermine the basic position?

Ralf Klamma, Technical University Aachen
‘Do Mechanical Turks Dream of Big Data?’

Klamma started in earnest by showing us some slides:  Einstein sticking his tongue out; stills from Dr. Strangelove; Alan Turing; a knowledge network (citation) visualization which could be interpreted as a ‘citation cartel’.  The Cold War image of scientists working in isolation behind geopolitical boundaries has been superseded by the building of new communities.  This process can be demonstrated through data mining, networking and visualization.

Historical figures like Einstein and Turing are now more like nodes on a network diagram – at least, this is an increasingly natural perspective.  The ‘iron curtain’ around research communities has dropped:

  • Research communities have long tails
  • Many research communities are under public scrutiny (e.g. climate science)
  • Funding cuts may exacerbate the problem
  • Open access threatens the integrity of the academy (?!)

Klamma argues that social network analysis and machine learning can support big data research in education.  He highlights the US Department of Homeland Security, Science and Technology, Cyber Security Division publication The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research as a useful resource for the ethical debates in computer science.  In the case of learning analytics there have been many examples of data leaks.

One way to approach the issue of leaks comes from the TellNET project.  By encouraging students to learn about network data and network visualisations, they can be given better control over their own (transparent) data.  Other solutions used in this project:

  • Protection of data platform: fragmentation prevents ‘leaks’
  • Non-identification of participants at workshops
  • Only teachers had access to learning analytics tools
  • Acknowledgement that no systems are 100% secure

In conclusion we were introduced to the concept of ‘datability‘ as the ethical use of big data:

  • Clear risk assessment before data collection
  • Ethical guidelines and sharing best practice
  • Transparency and accountability without loss of privacy
  • Academic freedom

Fiona Murphy, Earth and Environmental Science (Wiley Publishing)
‘Getting to grips with research data: a publisher perspective’

From a publisher perspective, there is much interest in the ways that research data is shared, and publishers are moving towards a model with greater transparency.  Some services under development will use DOIs to link datasets and archives in order to improve the findability of research data.  For instance, the Geoscience Data Journal includes bi-directional linking to original data sets.  Ethical issues from a publisher point of view include how to record citations and accreditation, manage peer review, and maintain security protocols.
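
As a small aside on how DOI-based linking works in practice, the sketch below uses the standard content-negotiation service at doi.org to fetch machine-readable metadata for a DOI. The example DOI is the one for the Ferguson et al. article cited later on this page; the CSL-JSON format requested is just one of the formats the service supports.

```python
import requests

def doi_metadata(doi):
    """Fetch machine-readable metadata for a DOI using content negotiation
    at doi.org (served by the registration agency, e.g. Crossref or DataCite)."""
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # DOI of the Technology, Pedagogy and Education article cited elsewhere on this page.
    record = doi_metadata("10.1080/1475939X.2013.870596")
    print(record.get("title"))
    print(record.get("container-title"))
    print([author.get("family") for author in record.get("author", [])])
```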

Data sharing models may be open, restricted (e.g. dependent on permissions set by data owner) or linked (where the original data is not released but access can be managed centrally).

[Discussion of open licensing was conspicuously absent from this though this is perhaps to be expected from commercial publishers.]

Luciano Floridi, Prof. of Philosophy & Ethics of Information at The University of Oxford
‘Big Data, Small Patterns, and Huge Ethical Issues’

Data can be defined by three Vs: variety, velocity, and volume. (Options for a fourth have been suggested.)  Data has seen a massive explosion since 2009 and the cost of storage is consistently falling.  The only limits to this process are thermodynamics, intelligence and memory.

This process is to some extent restricted by legal and ethical issues.

Epistemological Problems with Big Data: ‘big data’ has been with us for a while and should generally be seen as a set of possibilities (prediction, simulation, decision-making, tailoring, deciding) rather than a problem per se.  The problem is rather that data sets have become so large and complex that they are difficult to process by hand or with standard software.

Ethical Problems with Big Data: the challenge is actually to understand the small patterns that exist within data sets.  This means that many data points are needed as ways into a particular data set so that meaning can become emergent.  Small patterns may be insignificant so working out which patterns have significance is half the battle.  Sometimes significance emerges through the combining of smaller patterns.

Thus small patterns may become significant when correlated.  To further complicate things:  small patterns may be significant through their absence (e.g. the curious incident of the dog in the night-time in Sherlock Holmes).

A specific ethical problem with big data: looking for these small patterns can require thorough and invasive exploration of large data sets.  These procedures may not respect the sensitivity of the subjects of that data.  The ethical problem with big data is sensitive patterns: this includes traditional data-related problems such as privacy, ownership and usability but now also includes the extraction and handling of these ‘patterns’.  The new issues that arise include:

  • Re-purposing of data and consent
  • Treating people not only as means, resources, types, targets, consumers, etc. (deontological)

It isn’t possible for a computer to calculate every variable around the education of an individual, so we must use proxies: indicators of type and frequency that sacrifice the uniqueness of the individual in order to make sense of the data.  However, this results in the following:

  1. The profile becomes the profiled
  2. The profile becomes predictable
  3. The predictable becomes exploitable

Floridi advances the claim that the ethical value of data should not be higher than the ethical value of the entity the data describe, but should demand at most the same degree of respect.

Putting all this together: how can privacy be protected while taking advantage of the potential of ‘big data’?  This is an ethical tension between competing principles or ethical demands: the duties to be reconciled are 1) safeguarding individual rights and 2) improving human welfare.

  • This can be understood as a result of polarisation of a moral framework – we focus on the two duties to the individual and society and miss the privacy of groups in the middle
  • Ironically, it is the ‘social group’ level that is served by technology

Five related problems:

  • Can groups hold rights? (it seems so – e.g. national self-determination)
  • If yes, can groups hold a right to privacy?
  • When might a group qualify as a privacy holder? (corporate agency is often like this, isn’t it?)
  • How does group privacy relate to individual privacy?
  • Does respect for individual privacy require respect for the privacy of the group to which the individual belongs? (big data tends to address groups (‘types’) rather than individuals (‘tokens’))

The risks of releasing anonymised large data sets might need some unpacking:  the example given was that during the civil war in Cote d’Ivoire (2010-2011) Orange released a large metadata set which gave away strategic information about the position of groups involved in the conflict even though no individuals were identifiable.  There is a risk of overlooking group interests by focusing on the privacy of the individual.

There are legal or technological instruments which can be employed to mitigate the possibility of the misuse of big data, but there is no one clear solution at present.  Most of the discussion centred upon collective identity and the rights that might be afforded an individual according to groups they have autonomously chosen and those within which they have been categorised.  What happens, for example, if a group can take a legal action but one has to prove membership of that group in order to qualify?  The risk here is that we move into terra incognita when it comes to the preservation of privacy.

Summary of Discussion

Generally speaking, it’s not enough to simply get institutional ethical approval at the start of a project.  Institutional approvals typically focus on protection of individuals rather than groups and research activities can change significantly over the course of a project.

In addition to anonymising data, there is a case for making it difficult to reconstruct the entire data set in order to prevent misuse.  Increasingly we don’t even know who learners are (e.g. in MOOCs), so it’s hard to reasonably predict the potential outcomes of an intervention.
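
One standard way of thinking about how hard a released data set is to reconstruct or re-identify is k-anonymity; the sketch below is my own example rather than something presented at the workshop. It computes the smallest group size over a chosen set of quasi-identifiers, where a value of 1 means at least one person is unique on those attributes and therefore exposed.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size over the chosen quasi-identifier columns.
    A record in a group of size k can only be narrowed down to k candidates."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    return min(Counter(keys).values())

# Hypothetical learner records released alongside a study.
records = [
    {"age_band": "25-34", "country": "UK", "grade": 72},
    {"age_band": "25-34", "country": "UK", "grade": 55},
    {"age_band": "35-44", "country": "NL", "grade": 81},
]
print(k_anonymity(records, ["age_band", "country"]))  # -> 1: the NL learner is unique
```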

The BERA guidelines for ethical research are up for review by the sounds of it – and a working group is going to be formed to look at this ahead of a possible meeting at the BERA annual conference.

School of Open: Research Findings (Part I)

Dr Beck Pitt's blog - Mon, 10/03/2014 - 14:37


School of Open Logo (Source: https://p2pu.org/en/#schools) CC-BY-SA

Ever wondered what it means to “be” open? Or what kind of practices could be considered open? School of Open (SOO) courses focus on a variety of aspects of openness (from understanding copyright in different contexts, to open licensing and what open science is) and offer the opportunity to share, participate and create in the open. In addition, participants can earn a badge to acknowledge contribution/course completion.

Over the last year Jane Park (Creative Commons) and I have been working together to find out more about those participating in SOO courses and what kinds of impact the courses are having. Our initial focus has been four facilitated courses, all of which award badges for different types of participation: Copyright for Educators (Australia), Copyright for Educators (US), Creative Commons (CC) for K12 Educators and Writing Wikipedia Articles: The Basics. You can find out more about the courses here.

Our…

View original 1,420 more words


Publication statistics

My ORO download statistics

Our institutional research database, Open Research Online (ORO), has just released figures on the downloads for individual researchers.

The image shows my figures to date.

It is interesting to see how these compare with the citation figures that appear in Google Scholar.

For example, my thesis – The Construction of Shared Knowledge through Asynchronous Dialogue – has been cited twelve times to date, according to Google Scholar. Yet it has been downloaded 680 times from ORO, meaning that its reach is greater than the citations might indicate.

That figure also shows that uploading theses hugely increases their accessibility. I have ordered paper versions of theses and have found that they have only been signed out on two or three occasions – now they are much more easily discoverable, citable and applicable.


LACE – Learning Analytics Community Exchange

The LACE team

From 20-22 January, I was in Brussels for the kick-off meeting of the Learning Analytics Community Exchange (LACE).

The LACE project brings together existing key European players in the field of learning analytics and educational data mining (EDM) who are committed to building communities of practice and sharing emerging best practice, in order to make progress towards four objectives:

1. Promote knowledge creation and exchange
2. Increase the evidence base
3. Contribute to the definition of future directions
4. Build consensus on interoperability and data sharing

This will involve organising a range of activities designed to bring together people carrying out or making use of learning analytics and EDM research and development. LACE will also develop an ‘evidence hub’ that will bring together a knowledge base of evidence in the field. Members will also explore plausible futures for the field.

Core partners

Open Universiteit Nederland, Netherlands
Cetis, the Centre for Educational Technology and Interoperability Standards at the University of Bolton, UK
The Open University, UK
Infinity Technology Solutions, Italy
Skolverket, the Swedish National Agency for Education, Sweden
Kennisnet, Netherlands
Høgskolen i Oslo og Akershus, Norway
ATiT, Audiovisual Technologies, Informatics and Telecommunications, Belgium
EDEN, the European Distance Education Network, Hungary


Pre-teens’ informal learning with ICT and Web 2.0

Finally published online in Technology, Pedagogy and Education is our article on informal learning at primary school level. The research study focused on two groups of self-motivated learners, including one set who had set up their own Scratch programming club, and another group who belonged to a lunchtime robot-building club run by a parent.

The creative approaches to informal learning that these pre-teens used when working with new technology at home contrasted with the approaches that they were able to use within school. Their strategies of using different devices, collaborating with others both face-to-face and electronically, and consulting a range of websites were all constrained in school settings. Other constraints were associated with their age – for example, their lack of access to credit cards made online purchases a complicated procedure, and many of their decisions about use of technology were related to a lack of money to spend. They were also limited by parental constraints and legal constraints to a much greater extent than children only a few years older.

While other studies have focused on differences in use of technology for learning at age 11, when children move from primary to secondary school, this study suggests that a more significant shift in use of technology for learning takes place at age 13.

Ferguson, Rebecca; Faulkner, Dorothy; Whitelock, Denise and Sheehy, Kieron (2014). Pre-teens’ informal learning with ICT and Web 2.0. Technology, Pedagogy and Education http://www.tandfonline.com/doi/abs/10.1080/1475939X.2013.870596#.UtWYzmTuKjE

Abstract

ICT and Web 2.0 have the potential to impact on learning by supporting enquiry, new literacies, collaboration and publication. Restrictions on the use of these tools within schools, primarily due to concerns about discipline and child safety, make it difficult to make full use of this potential in formal educational settings. Studies of children at different stages of schooling have highlighted a wider range of ICT use outside school, where it can be used to support informal learning. The study reported here looks beyond the broad categories of primary and secondary education and investigates the distinctive elements of pre-teens’ use of ICT to support informal learning. Nineteen children aged 10 and 11 participated in focus groups and produced visual representations of ICT and Web 2.0 resources they used to support their informal learning. Thematic analysis of this data showed that pre-teens respond to a range of age-related constraints on their use of ICT. Inside formal education, these constraints appear similar at primary and secondary levels. Out of school, regulation is more age specific, contributing to the development of tensions around use of ICT as children approach their teenage years. These tensions and constraints shape the ways in which children aged 10 to 11 engage in formal and informal learning, particularly their methods of communication and their pressing need to develop evaluation skills.