Scrum management framework

Published on Thursday, December 11th, 2014

With its scrums, sprints and stories, Scrum management always sounds intriguing. I’ve been involved with several teams who have either used this system knowingly, or have employed elements from it. However, I had never seen the process formalised until I spotted it in the January 2015 edition of Wired magazine (which ran a condensed version of Jeff Sutherland’s book). Wikipedia tells me that this style of software development emerged in 1986 – so I guess I’ve been slow in investigating the approach.

Wired describes it as a seven-step process (a small, hypothetical code sketch of the core ideas follows the list):

1. Small teams. These should include the product owner, who has the vision and decides on the order in which things should be done, and the scrum master, who facilitates communication and removes obstacles.

2. Tell stories. Each new feature should be associated with a short story about the user and why the feature will add value for them.

3. Assign effort points. Compare the stories and give them points for the effort involved (or T-shirt sizes: small, medium, large and extra large).

4. Prioritise features. Each sprint should end with something that can be demoed, so make the chunks of work small enough to fit into a sprint.

5. Sprint. A sprint should be 1-4 weeks long – long enough to deal with a set amount of effort points.

6. Scrum. A 15-minute meeting every morning, standing up, so you’re not tempted to settle in. Three questions. What did you do yesterday to help finish the sprint? What will you do today to help finish the sprint? What obstacles need to be overcome?

7. Sprint review. At the end of the sprint the team meets to discuss what has been achieved, and to improve working practices for the next sprint.
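Purely as an illustration – this is my own sketch, not anything from the Wired piece or Sutherland’s book – the core ideas of stories, effort points and sprint capacity might be modelled like this; every name and number below is invented:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Scrum concepts above; names and
# numbers are invented for illustration.

@dataclass
class Story:
    title: str   # the user story, e.g. "As a reader, I want search"
    effort: int  # effort points agreed by the team


@dataclass
class Sprint:
    capacity: int  # effort points the team can absorb in one sprint
    backlog: list[Story] = field(default_factory=list)

    def plan(self, prioritised_stories: list[Story]) -> list[Story]:
        """Pull stories, in priority order, until capacity is used up."""
        used = sum(s.effort for s in self.backlog)
        for story in prioritised_stories:
            if used + story.effort <= self.capacity:
                self.backlog.append(story)
                used += story.effort
        return self.backlog


stories = [
    Story("As a reader, I want full-text search", 5),
    Story("As an editor, I want to schedule posts", 3),
    Story("As an admin, I want a spam filter", 8),
]
sprint = Sprint(capacity=10)
for story in sprint.plan(stories):
    print(f"{story.effort:>2} points: {story.title}")
```

The point is only that prioritisation plus effort points gives a mechanical way to decide what fits into a sprint; the story that doesn’t fit simply waits for the next one.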


I’m back!

Published on Tuesday, November 25th, 2014

I managed to lock myself out of this blog for over a year. First I forgot my password – then I forgot that I had created an in-box rule in Outlook that automatically junked any messages from my blog (I kept getting messages about moderating spam). So my password resets have all been vanishing into the ether.

Today I set a new in-box rule and spotted/deleted the old one. I’m back in.

First action – delete more than 10,000 spam comments that have arrived on the site while I have been away.


How to structure a literature review

Published on Monday, September 9th, 2013

It’s difficult to structure a literature review – you have read tens, or even hundreds, of articles, chapters, blog posts and presentations, and it appears almost impossible to pull them into shape and relate them to your own work. As a PhD student or early-career researcher, it is difficult to know how your contribution fits in.

One way forward is to treat your literature review as an art gallery (I guess a science museum would be a good alternative if you are not from an arts background).

You first welcome your reader/visitor to the art gallery and briefly point out that it deals with art and not science – specifically, with paintings. If they are looking for geological specimens, 19th-century curios or medieval tapestries, they are in the wrong place.

You walk them through the doorway – pointing out the names of famous painters engraved above the door, thus situating what you are showing them in the context of a tradition. You don’t need to dwell at the entrance, just show that you are aware of some of the greats who have gone before.

Next, you walk them down the corridor into the gallery of (for example) European art, pointing out the 16th-century gallery and the 19th-century gallery, taking them in more detail past the expressionists and the cubists. Here you are beginning to relate your work to some broad subject areas, showing awareness of how these have developed over time.

You pause to look closely at a series of paintings by Monet and Picasso, focusing the attention of your audience on two specific paintings. Here you are introducing the work most closely related to your own, drawing attention to salient points and identifying the gap that your work will fill.

Finally, you lead them into the new alcove you have constructed to look at its contents. This is the work that you will describe and explore in the following sections or chapters.

The route you have taken helps your audience to understand what they see in the new alcove. People coming straight to the alcove wouldn’t really understand what was going on there, and certainly wouldn’t be able to understand it in terms of what had gone before. People choosing their own path through the gallery might miss the significance of your alcove, or understand it in a completely different way.

Your tour guides your audience through the environment to your work. They may already know that environment very well, and be looking out for key landmarks, or even for their own work, but it is only you who can create for them the route that shows your work to its best advantage.


Learning analytics and ethics

Published on Tuesday, July 2nd, 2013

Way back when, I did some thinking about the differences between approaches to ethics in the Arts and the Social Sciences. To generalise, the Social Sciences treat the Internet as space, whereas the Arts treat the Internet as text. As I noted at the time: if you view the Internet as a virtual space populated by human actors, then you need a human subject approach to ethics, with informed consent a big issue. If, on the other hand, you see the Internet as an accumulation of texts, then your concern is with data protection, copyright and intellectual property rights. One practical example of this is that giving the real name of a data source is typically unethical behaviour in the Social Sciences, while failing to give the real name of a data source is typically unethical behaviour in the Arts.

So ethical behaviour is not a given – it is context dependent.

Extending this to learning analytics, a one-size-fits-all approach to ethics won’t necessarily work. Ethical research behaviour depends on what we are doing, on what we are trying to do and on what those involved in the research believe we are trying to do. The ethics discussion at #lasi13 suggested many of us are trying to do different things – so maybe our approach to ethics will need to vary according to context.

Much of the discussion about the ethics of learning analytics this morning was framed in terms of learning as an economic transaction. The student contributes time, effort and money to an educational institution and, if this transaction is successfully completed, the student should emerge with a proxy for learning in the form of a certificate.

This view of learning is associated with a view of data as a commodity to be owned and exchanged. In order for this transaction to be successfully completed, some exchange of data (attendance figures, grades, etc.) is essential, and each party to the contract has rights and responsibilities in relation to the data.

So that implies a contractual perspective on ethics. My own work is in a different context – in informal or barely formal learning settings. Learning through social media, open educational resources, MOOCs, virtual worlds… The transaction is not overtly economic, the outcomes are more varied, the data have a different role. There is less sense of an obligation on either side. I suspect this means that the ethical concerns and priorities will be different, and that negotiating them will take us in different directions.

So one ethical code for learning analytics may prove impossible; we may need to shift from one to another according to context.


Approaches to pedagogy

Published on Thursday, April 25th, 2013

Great infographic summarising approaches to pedagogy, making connections with key thinkers in the field.

http://cmapspublic3.ihmc.us/rid=1LGVGJY66-CCD5CZ-12G3/Learning%20Theory.cmap


Educational Data Mining for Technology-Aided Formative Assessment

Published on Tuesday, February 12th, 2013

Notes on seminar by Dr Ilya Goldin

Dissertation ‘Peering into peer review with Bayesian models’

Interested in how we can help students who are learning to analyse open-ended problems. How do we help them to do peer review? Peer review removes the instructor from the interaction between students. How do we keep the instructor in the loop?

Students need feedback that explains their current level, describes target performance and suggests ways of getting there.

Rubrics are used in peer review to inform assessors of criteria, to support reviewers in their evaluation, and to give a structure to the feedback received by the author. They state the criteria of interest and define each criterion.

When dealing with open-ended problems you need to focus on argumentation. Generic rubrics can be replaced by domain-relevant rubrics or by problem-specific rubrics. However, the rubric is then more limited in its scope.

An experiment was run with 58 law students. Each essay received four peer reviews; these were passed on to the authors (who had adopted pseudonyms), and then the authors gave feedback on the feedback. Pre- and post-measures were assessed via a quiz. Students received one of two rubrics – one that was domain specific and concept oriented, and one that was domain relevant and argument oriented.

The domain-relevant rubric focused on issue identification, argument development, justified overall conclusion and writing quality. For each dimension you were given one anchored rating and one comment, e.g. ‘3 – develops few strong, non-conclusory arguments, and neglects counter-arguments’. (Prior research suggests that if people just give a rating, these tend not to be as well justified.)

The problem-specific rubric focused on breach of non-disclosure, trade secret misappropriation, violation of right of publicity, and two examples of idea misappropriation. Here, an example of a review might be:

3 – identifies claim, but neglects arguments pro/con and supporting facts; some irrelevant facts and arguments.

This rating scale could be used with many problems, if you were aware of what the key issues were.

If students were taken as individuals, and you looked at the average of the peer review scores each essay received, these averages were not helpful for predicting instructor scores. However, if you worked on the basis that scores in the class were likely to be related to other scores in the class, then it was possible to predict instructor scores.
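As a toy illustration of that last point – my own sketch, not the Bayesian model from the seminar – partial pooling shrinks each essay’s average peer score towards the class mean, so every estimate borrows strength from the rest of the class. The numbers below are simulated:

```python
import numpy as np

# Toy partial-pooling sketch (not the model from the seminar).
# Each row holds the four peer ratings one of 58 essays received.
rng = np.random.default_rng(0)
peer_scores = rng.normal(loc=3.0, scale=0.8, size=(58, 4))

class_mean = peer_scores.mean()
essay_means = peer_scores.mean(axis=1)

# Noise in an essay's mean from having only four reviewers, and the
# genuine spread between essays once that noise is subtracted out.
within_var = peer_scores.var(axis=1).mean() / peer_scores.shape[1]
between_var = max(essay_means.var() - within_var, 0.0)

# Shrinkage weight: how much to trust an essay's own reviews
# relative to the class as a whole.
weight = between_var / (between_var + within_var)
pooled = class_mean + weight * (essay_means - class_mean)

print(f"class mean {class_mean:.2f}, shrinkage weight {weight:.2f}")
print(f"essay 0: raw {essay_means[0]:.2f} -> pooled {pooled[0]:.2f}")
```

An essay that happened to draw four unusually harsh or unusually generous reviewers is pulled back towards the class mean, which is one intuition for why modelling the class jointly can predict instructor scores where raw averages do not.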


OpenPAD

Published on Monday, February 4th, 2013

HEA accreditation is a ‘recognition of commitment to professionalism in teaching and learning in higher education’.

You can be an Associate of the HEA, a Fellow, a Senior Fellow or a Principal Fellow.

At the OU, this will be replacing our taught courses such as the postgraduate certificate in teaching and learning in HE and the postgraduate certificate in academic practice.

People who benefit will include central academic staff, staff tutors, ALs…

To participate – engage with the wiki, the resources and the faculty-based forums on the OpenPAD Moodle site, with support from a faculty mentor, and complete a practitioner inquiry based on your own practice. This should take 6-12 months and will be assessed by the accreditation panel for OpenPAD fellowship. This will lead to both internal and external accreditation.

IET provides accreditation for staff via the HEA.


Alpine Rendezvous: Workshop overview

Published on Wednesday, January 30th, 2013

About 116 people registered to attend the Alpine Rendezvous this year – 10 workshops, almost every country in Europe represented and several attendees from outside Europe.

Report from Workshop 1: Orchestration

How do teachers orchestrate events inside and outside the classroom?

First model – started with a very dry mathematical model. How does the teacher manage their workload? This only applies if you think of the classroom as a shoebox.
Second model – takes the context and the students into account.
Third model – includes meaning making.
Fourth model – takes emotions and feelings into account (emotional intelligence of teachers and learners).
Fifth model – includes identity as a driver of learning.

Grand Challenge: Link with the emotions of teachers. Make teachers happy throughout their working lives – still believing in, and expecting things from, all their students.

Workshop 2: Data analysis and interpretation for interactive LEs (DAILE13)

Mainly computer scientists

Papers along the lines of analytics for intelligent tutoring systems, and also analytics to support decision-making for different actors.

Grand challenge: Interactive learning analytics: from accountability to ‘opportunity management’ in a multi-actor perspective

Moving beyond the focus on learners and including data from other actors. Want to use analytics in a socially responsible way. Consider the interdependence of analytics feedback on decisions and ultimately on power relations and empowerment. Make human responsibility explicit. Support reflection and openness.

Grand challenge: towards adaptive and adaptable learning in massive online courses

Workshop 3: (Our workshop) – Teaching Inquiry into Student Learning, Learning Design and Learning Analytics

Grand challenge: Empower the future teacher

Workshop 4: Smart Cities

Concerned with the well-being of people in those cities. A way of optimising resources, including time. Smartness is different from country to country: the UK doesn’t care much about the environment, while Finland scores very high on governance. So there are cultural issues involved.

This can become a consumer approach – in which citizens consume the smart cities that have been developed for them. An alternative would be a bottom-up approach, achieved with and through learning.

Should we talk about a smart city or about a smart territory? The most important thing seems to be the space of flux around the city – for example the commuter belt.

They used Villard as a case study, including interviews and a tour. They identified perceived needs and came up with actions such as a Vercors card giving access to benefits and facilities in the area, learning through space gamification, and learning about Villard life by monitoring relevant traces and emergent behaviours.

Multidimensional monitoring embedded into the learning (learning analytics aspect).

Grand Challenge: International observatory on smart city learning. To raise awareness and attract people to get involved.

Grand Challenge: Promote smart city learning and people-centred smart cities / territories

Workshop 5: Crisis and response

Some of the questions that emerged: the political and pedagogic implications of the interpenetration of real and virtual worlds; how digital cultural resources are distributed; what the candidates are for a mobile, highly networked pedagogy; and how to investigate and advocate for pedagogies of meaning making, identity formation, contingency and (resilience to) provisionality.

Grand challenge: How can TEL contribute to resolving educational inequalities?

Democratise access to learning through digital means. Need a more rigorous identification of the role TEL developments are playing in systemic inequalities. Europe has some of the historically most democratic and emancipatory education systems in the world.

Crisis of legitimacy in the face of open online education

Can we significantly alleviate inequalities of educational outcome?

Examine the big picture of digital capital and capability across Europe.

Workshop 6: Technology support for reflecting on experiences and sharing them across contexts

If you search ‘technology enhanced learning’ and ‘vocational’ on Google you don’t get many hits.

Vocational learning is dual centre – you have your workplace and you have your classroom. How can what you learn in these two contexts be integrated?

The partial solution is called the Erfahrraum (experience space). This has collection, validation and exploitation phases, bringing together practical and conceptual knowledge.

Workshop 7 (Coming up): Challenges of analysing multi-scale and temporal data

Existing research methods do not fully utilise the temporal information embedded in the data, which reduces their explanatory power and limits the validity of their conclusions.


Catwalk technologies

Published on Thursday, January 17th, 2013

Notes on ‘You heard it here first’ seminar at the OU from Anne Adams.

When considering research about innovation, is it catwalk or ready-to-wear? Is it ready to use off the shelf, an innovation that people can take up and use, or is it a catwalk approach, testing and showcasing what is possible without suggesting that this will be taken up as it is? A catwalk approach may bring together elements from many different approaches and disciplines. Participants need to know which you are aiming for, so they don’t expect something they can take away and use if the project is about experimentation and high-tech solutions.

A ‘boundary creature’ inhabits more than one world. They may move between practice domains and can be seen as a deviant from the norm, a form of monster (Donna Haraway, 1991).

Wenger’s view is that distance learning locates learning closer to the learner because it goes to people in their space rather than expecting learners to shift into the academic space.

Technology can act as a boundary object, crossing knowledge domains and structures. It can support communication and collaboration by acting as a shared interface. However, it can form a barrier if it is too associated with jargon or with specific practices.

Researchers bring:

spatial acuity – sensitivity to spatial issues in the environment, such as weather, use of space and spatial triggers

temporal acuity – perception and reality of time in relation to the environment, including the time taken to learn systems and the flow of time in the lab

socio-political astuteness – perceptions and interactions of a variety of stakeholders, around inhibitions, safety, expectations and ideologies


OLDS MOOC Launch – liveblog notes

Published on Monday, January 7th, 2013

Launch event for the Open Learning Design Studio massive open online course.

Lots of people struggling to be here – the Cloudworks link is down and YouTube doesn’t seem to be broadcasting anything. Some people can pick up via Stadium – others are having less success. Lots of discussion across Google+, Twitter and Facebook. Some people getting in after half an hour, others still struggling. I’ve given up on online access and walked downstairs to see it in RL.

Keywords around the field of learning design: context (local theories, metic reasoning and ecology of resources), practice (pedagogic practice, design practices, epistemic practice and learning as confluence), representations (designing representations, representations of design, design principles, design patterns, design narratives), curation and tools (tools include CADMOS, Collafe, Learning Designer, Pedagogical Pattern Collector).

This is a project-based MOOC.

Cycle: initiate, investigate, ideate, connect, prototype, curate, evaluate, reflect.

Initiate is about conceiving the project and finding a team to work with. Working on something that is a real concern, that you can work on with a team of people.

Investigate is about investigating the context in which you are working. How to work from that context to achieve your aims.

Ideate is about techniques for brainstorming designs and generating ideas.

Connect your ideas with other people’s ideas and look at design patterns, asking which patterns and principles are applicable to your project.

Prototype is about building a quick initial implementation of your idea – something you can work with and play with.

Curate

Evaluate the design – maybe as an artefact, maybe through use.

Reflect on the process and, through this, look at various processes of learning design.

Prezi: http://prezi.com/b44jwdgvs8nl/olds-mooc-introduction/