The presenter, Gregor Kennedy, started by saying that this was a bit of an odd presentation.
He was presenting on behalf of quite a big team, including a couple of data mining experts. Learning analytics are a hot topic, “and why not?” Kennedy said that they are a gold mine for identifying students who are at risk and for promoting a shared understanding. Learning analytics looks at the micro level of what students do in virtual environments (activity that is usually ‘hidden’) to understand what they are doing. Academic analytics offers a much more macro view, and is aimed more at managers and administrators than at students and educators.
Intelligent tutoring systems grew out of the 1960s: you have an area of knowledge within a domain, and a model of what students are expected to do within a specific pedagogical approach. There is a long history of educators being interested in learning analytics, although they weren’t necessarily sure what they were going to do with the data. By the 1980s intelligent tutoring systems had been largely discredited, in part because of the rigidity of the model and approach.
There are some concerns with learning analytics. They are often descriptive (which is useful), but they do not close the feedback loop for students. There is a rich body of research around how students use technology.
A demonstration of one of the simulations gave us an idea of what students can experience in this type of 3D environment and some of its benefits. The haptic controls mean that students can ‘feel’ the different textures that they would if they were actually performing the surgery. Usually a surgeon will sit at a student’s shoulder to give feedback as the student performs surgery. In this trial, 30 novices and 30 experts took part in simulator runs. Data was collected throughout and categorised (e.g. burr metrics, anatomical structure metrics, and bone specimen metrics). The idea was that by using the data they could gain a sophisticated understanding of what was ‘going on’. Forty-five percent of the surgeries were completed with an average force magnitude of less than 0.23 newtons; 78% of these were performed by novices.
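The kind of slicing described here (filtering completed runs below a force threshold, then asking what share came from novices) can be sketched in a few lines. Everything below is hypothetical: the `SimulatorRun` class, field names, and sample values are illustrative, not the team’s actual data or code.

```python
from dataclasses import dataclass

@dataclass
class SimulatorRun:
    """One recorded drilling run; fields are illustrative only."""
    operator: str        # "novice" or "expert"
    completed: bool
    mean_force_n: float  # average force magnitude in newtons

def low_force_completions(runs):
    """Completed runs whose average force stayed under 0.23 N."""
    return [r for r in runs if r.completed and r.mean_force_n < 0.23]

# Invented sample data for illustration.
runs = [
    SimulatorRun("novice", True, 0.18),
    SimulatorRun("expert", True, 0.31),
    SimulatorRun("novice", False, 0.15),
    SimulatorRun("expert", True, 0.21),
]

subset = low_force_completions(runs)
share_novice = sum(r.operator == "novice" for r in subset) / len(subset)
```

With richer metric categories (burr, anatomical structure, bone specimen), the same pattern of filtering and cross-tabulating would simply extend to more fields.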
The presentation was interesting and it was good to see an approach evaluated so rigorously. The patterns of behaviour demonstrated by a novice were identified, which means that feedback can be given to help students improve their skills. There is a balance between providing feedback and not disrupting a particular student’s behaviour. This was resolved by looking at a surgeon’s usual micro pauses, and then deciding whether feedback should be given. Making meaning from the data is tricky: there needs to be a conceptual framework that you keep going back to in order to make sense of the data (in this case, going back to the surgeons). There is a lot of data and a lot of ‘noise’ too, which makes this difficult.
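One way the micro-pause idea could work is to queue feedback messages and release them only when the measured force has been low for a short stretch, i.e. during a natural pause. The sketch below is an assumption on my part, not the presenters’ implementation: the function name, tick-based sampling, and thresholds are all invented for illustration.

```python
def pause_gated_feedback(force_samples, pending_feedback,
                         pause_threshold=0.05, min_pause_len=3):
    """Deliver queued feedback only during micro pauses.

    force_samples: per-tick force magnitudes in newtons (illustrative).
    A pause is min_pause_len consecutive ticks below pause_threshold.
    Returns a list of (tick, message) delivery events.
    """
    deliveries = []
    queue = list(pending_feedback)
    quiet = 0
    for tick, force in enumerate(force_samples):
        quiet = quiet + 1 if force < pause_threshold else 0
        if quiet >= min_pause_len and queue:
            deliveries.append((tick, queue.pop(0)))
            quiet = 0  # at most one message per pause
    return deliveries

# Two bursts of drilling separated by pauses; messages land in the gaps.
events = pause_gated_feedback(
    [0.20, 0.30, 0.01, 0.02, 0.01, 0.40, 0.01, 0.01, 0.02],
    ["ease off the force", "check the drilling angle"],
)
```

The design choice mirrors the talk: rather than interrupting a concentrating student, the system waits for behaviour that signals a natural break before speaking up.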
Future steps include providing different types of feedback (not just around force), and finding different ways of providing feedback to users who are concentrating on a task in a virtual world.