
CW+ Part 2

Since my last blog post, my work at CW+ has quickly turned from theories and ideas into a fully-fledged set of interactive musical tools, which are now being regularly tested on the older-people wards at Chelsea and Westminster Hospital.  A video further down this blog post shows each of these in action, with a brief explanation of their features.

Following my first forays into hospital-based music, as detailed in my previous blog post, I undertook further investigatory trips to the wards, including interviews with ward managers and occupational therapists; with the help of the CW+ team we began to piece together a plan for how my work might fit in.  We identified several areas of older-people care in which interventions based on interactive music technology might help to address specific clinical needs: these included the exercising of fine motor skills and upper limb movement, stimulating cognitive processes associated with pattern identification and rhythm, and redeveloping visual scanning and communicative ability, particularly in stroke patients.  On top of this, however, was the desire to also give patients a sense of musicality, allowing them to assert their own creative identity amidst a potentially alienating environment.

I began to put together some early ideas for interactive music tools which might address these challenges, and for this I turned to the visual programming environment Max/MSP.  Since I was introduced to this software at Brunel University, I’ve been completely hooked on its accessibility and flexibility: in short, it allows simple connections and transformations to be made between sound, video, and almost any other multimedia input.  It can easily be controlled via wireless devices, with downloadable apps such as TouchOSC providing the ability to quickly design effective touchscreen interfaces.
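
For readers curious about the plumbing, here is a rough sketch (in Python rather than Max, and with a TouchOSC-style address and port chosen purely for illustration) of how a button press on a tablet arrives in a program as an OSC message.  The real tools handle all of this inside Max, but the principle is the same.

```python
# A minimal sketch of receiving a TouchOSC button press as an OSC message,
# assuming the python-osc library and a TouchOSC-style address of /1/push1
# sent to port 8000 (both are illustrative defaults, not my actual setup).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def on_push(address, value):
    # TouchOSC buttons typically send 1.0 on press and 0.0 on release.
    if value == 1.0:
        print("Button tapped - trigger the next musical event here")


dispatcher = Dispatcher()
dispatcher.map("/1/push1", on_push)

# Listen on all network interfaces; the tablet sends to this machine's IP.
server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()
```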

I knew from my conversations with occupational therapists that simplicity of design and comprehension was going to be paramount in these tools, so the first interface I built consisted only of a large red button marked ‘PUSH’.

[Screenshot: the single ‘PUSH’ button interface]

When tapped with a regular beat the button triggers the playing of a MIDI file one beat at a time; each tap is another beat, and this effectively controls the tempo of the music, with all subdivisions of the beat also matching the user’s tempo.  In essence, the user is completely in control of the speed of the music, and can accelerate or decelerate as they see fit.  Not only does this exercise a patient’s fine motor skills and rhythm-pattern recognition, but it also allows them the creative freedom to explore the musical effect of tempo changes: making ‘rubato’ fluctuations, slowing the music to a crawl or speeding it up to an inhumanly fast gallop.
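
The underlying logic is simple enough to sketch outside Max.  The Python below is only an illustration of the idea rather than the actual patch: each tap plays the next beat of a stored sequence, and the time between taps is used to scale any subdivisions within that beat to the user’s tempo.

```python
import time

# An illustrative sketch (not the actual Max patch) of the tap-tempo idea:
# the music is stored beat by beat, each beat holding (offset, MIDI note)
# pairs, so subdivisions can be scaled to whatever tempo the user taps.
MELODY_BEATS = [
    [(0.0, 60)],                         # beat 1: a single note on the beat
    [(0.0, 62), (0.5, 64)],              # beat 2: two quavers
    [(0.0, 65)],                         # beat 3
    [(0.0, 67), (0.5, 65), (0.75, 64)],  # beat 4: quaver plus two semiquavers
]


class TapTempoPlayer:
    def __init__(self, beats):
        self.beats = beats
        self.position = 0
        self.last_tap = None

    def tap(self):
        """Each tap plays the next beat; the gap between taps sets the tempo."""
        now = time.monotonic()
        beat_length = (now - self.last_tap) if self.last_tap else 0.5  # 120 bpm default
        self.last_tap = now
        for offset, note in self.beats[self.position % len(self.beats)]:
            # Subdivisions are delayed in proportion to the user's beat length.
            print(f"schedule note {note} after {offset * beat_length:.2f} seconds")
        self.position += 1


player = TapTempoPlayer(MELODY_BEATS)
player.tap()  # in the real interface this is wired to the big red 'PUSH' button
```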

Further developments of this tool were made in collaboration with the stroke-specialist occupational therapists at the hospital, who suggested that an array of buttons to push might further exercise visual scanning and challenge upper-limb movement.  To this end, variations on this interface were built in which a selection of buttons had to be pushed in the right order corresponding to beats in the music, and a further version required the user to follow a randomly moving button.

[Screenshots: the multi-button variants of the ‘tempo tapper’ interface]

Continuing the theme of simplicity, the next tool developed for the project was a coloured keyboard display with eight large keys. In the first version of this the keyboard was just a tool for ‘noodling’, creating quick melodies and improvising, with an optional ambient backing track of slowly shifting chords. In the testing stages, however, it quickly became clear that users wanted more of a challenge, so a version was developed in which a well-known tune could be played to a matching backing track. The next note of the melody would light up just before it was due, and if the user tapped it in time the music would continue at the same tempo. Like the ‘tempo tapping’ tool described above, this mechanism also allowed users to control the speed of the music, and ensured that however slowly the tune was realised the accompaniment would move at the same tempo.

[Screenshot: the eight-key coloured keyboard interface]
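
Again purely as an illustration (the real version is a Max patch, and the tune below is invented for the example), the ‘follow the lit key’ mechanism boils down to something like this:

```python
# An illustrative sketch of the guided-melody keyboard: the next note of the
# tune is lit on screen, and the accompaniment only moves on when that key is
# tapped, so the piece always proceeds at the user's own pace.  The tune and
# key numbering are invented for the example.
MELODY = [0, 2, 4, 0, 4, 5, 7]  # keys 0-7 of the eight-key coloured keyboard


class GuidedKeyboard:
    def __init__(self, melody):
        self.melody = melody
        self.index = 0

    def highlighted_key(self):
        """The key that should currently be lit up on screen."""
        return self.melody[self.index]

    def key_pressed(self, key):
        """Advance the melody and accompaniment only on the correct key."""
        if key == self.highlighted_key():
            print(f"play key {key} and advance the backing track one beat")
            self.index = (self.index + 1) % len(self.melody)
        else:
            print("wrong key: the backing track waits and the highlight stays put")


keyboard = GuidedKeyboard(MELODY)
print("light up key", keyboard.highlighted_key())
keyboard.key_pressed(0)  # correct: the music moves on
keyboard.key_pressed(3)  # incorrect: nothing advances
```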

Further development of both of these tools continued throughout the testing in response to feedback from the occupational therapists.  For example, it was noted that whilst both tools gave the user a taste of the manual dexterity required to play an instrument, they lacked the sensation of actually pushing a key or button; in response to this the system was adapted so that pressing the correct button was matched with a vibration of the touchscreen, creating a degree of ‘haptic feedback’ in which the user can not only see and hear when the right note is pushed but also feel it.

Another example of a very specific development of the interface was made in response to a discussion about a common symptom among stroke patients in which one side of the visual field is neglected.  The stroke-specialist occupational therapists explained to me that they try to get patients to compensate for this with specific techniques, such as identifying a clear focal point on their ‘neglected’ side from which to start a scan of their field of vision.  Following this discussion, subsequent revisions of both the multi-button ‘tempo tapper’ and the keyboard interfaces featured a bright white bar down one side of the screen, in order to help patients with such symptoms practise this process.

The final tool to be developed was a more complicated interface resembling a ‘mixer’, in which users can combine a range of different computerised instruments to create an individualised musical texture, and then improvise over it with the keyboard tool described above.  Users also get the option to adjust the register (high to low) and volume (loud to quiet) of each instrument, and can therefore intricately personalise the music they create.  The notes which the instruments play are semi-randomly generated as the piece progresses, drawn from a sequence of random Lydian modes which underpin the music.  As the instruments used are run in the software Logic, it is also possible to record the results quickly, creating whole pieces of complex, multi-layered music at a patient’s bedside.

[Screenshot: the ‘mixer’ interface]
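
To give a rough sense of how the ‘semi-random’ note generation works, here is an illustrative sketch; the Lydian interval pattern is real, while the register range and phrase length are arbitrary choices for the example rather than the values used in the actual patch.

```python
import random

# An illustrative sketch of semi-random note generation over a Lydian mode.
# The Lydian mode is a major scale with a raised fourth; these are its
# semitone offsets from the root note.
LYDIAN_STEPS = [0, 2, 4, 6, 7, 9, 11]


def random_lydian_scale():
    """Choose a random root and build a two-octave Lydian pitch set from it."""
    root = random.randint(48, 59)  # a root in the octave below middle C
    return [root + octave * 12 + step
            for octave in range(2) for step in LYDIAN_STEPS]


def generate_phrase(scale, length=8):
    """Draw notes semi-randomly from the current mode to form a short phrase."""
    return [random.choice(scale) for _ in range(length)]


scale = random_lydian_scale()
print("current mode:", scale)
print("generated phrase:", generate_phrase(scale))
```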


As I wrote at the start of this post, I am now well into the testing phase of these tools, an experience which has been consistently fascinating for all sorts of different reasons.  There have been some striking and memorable successes, such as one stroke patient who managed to keep tapping a steady beat despite being unable to speak and having almost no upper-limb mobility; a return visit also saw her begin to improvise using the keyboard interface, and her relatives reported that she was trying to sing after we had left.  Another patient I saw had difficulty focusing her attention on a task for more than about thirty seconds, but when she used the interactive tools described above she not only quickly grasped how to use them but remained absorbed for several minutes on end.  I have made several return visits to this patient, culminating in recording her improvising over her own personalised backing track; she explained to me that after these visits the dexterity in her hands and fingers had begun to improve, and that she wanted to purchase a piano keyboard in order to continue this progress.

These examples represent two ends of a broad spectrum of severity in terms of the condition of patients using these tools, and it quickly became clear that the level of engagement appropriate for each patient had to be judged carefully.  Some were only able to listen to the music, whilst others engaged to different levels within each of the three interfaces, and it was up to me to decide how far to go during each trial.  This reminded me a lot of band-leading, in that the leader must be ultra-sensitive to the personalities and mindsets of the musicians under their leadership, in order to know when and how to push them to their musical limits.

There were two particularly interesting responses which emerged on several occasions.  The first revealed how different the acts of listening to music and making music are in some people’s minds: many were more than happy to have music played to them, but when asked to try tapping the beat of that same music they would decline, responding that it would be too hard, or require a musical skill or level of concentration they lacked.  The second common response to the trial was, astonishingly, to offer me payment: even some of those who simply listened to the music seemed to be under the impression that they would have to pay for the experience.  Many were surprised that I wasn’t trying to sell the interfaces to patients, and one lady was so impressed she demanded to have her name put on the ‘waiting list’ to buy them when they became commercially available.

Together, these two responses made me realise just how rarefied and commodified music-making appears to be in our culture, and how carefully the invitation to use these tools must be considered, so as not to lead patients to the assumption that they are somehow ill-equipped to engage with them.  Just like a music-leader in a school or community setting, the art of applying such interventions may lie in convincing the participant that real expressive music-making is not the sole domain of trained musicians.

My most recent addition to the interfaces has been automatic data tracking and reporting for each use, monitoring a patient’s tapping speed and tempo deviation and saving them for analytical purposes.  This will support the final part of the project, in which the impact on patients of using these tools will be evaluated, and the future use of these kinds of interventions will be considered.  Certainly the impact on me as a musician has already been profound, and the project continues to reveal fascinating insights into the possibilities and practicalities of music-making within this most unusual of environments.
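
As a footnote for the technically minded, here is a rough illustration of what that data tracking involves (the names and file format below are invented for the example, not the actual implementation): each tap time is recorded, and from those an average tempo and its deviation can be derived and saved for later analysis.

```python
import csv
import statistics
import time

# An illustrative sketch of the data tracking: record the time of every tap,
# then summarise the session's average tempo and how steadily it was held.
tap_times = []


def record_tap():
    """Called on every tap of the interface."""
    tap_times.append(time.monotonic())


def save_session_summary(session_id, path="session_log.csv"):
    """Derive tempo statistics from the taps and append them to a CSV file."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    if len(intervals) < 2:
        return  # not enough taps to say anything useful
    mean_bpm = 60.0 / statistics.mean(intervals)
    deviation = statistics.stdev(intervals)  # how much the tempo wavered
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([session_id, round(mean_bpm, 1), round(deviation, 3)])
```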