Empowering Technologies: The bad, the good, and the meaningful

Some time ago I was invited to a workshop titled ‘Empowering Technologies’, held in the Design Lab of the University of Twente. The specific focus of the workshop was to “discuss ways in which we can ‘empower’ people with cognitive impairments in daily life settings”. The list of potential ‘empowerers’ included nearly all the buzzword technologies of recent years: “augmented reality, internet-of-things, ubiquitous computing, ambient intelligence, . . .” (it didn’t include blockchain yet, or maybe it was hidden behind the three dots).

Besides my (or rather our, as Summ( )n) interest in new technologies, topics of this kind are particularly dear to me: one of my degrees is in clinical psychology, and I even worked in a psychiatric clinic earlier in my life. This makes me a bit better informed than your average workshop participant, but also – and perhaps even more importantly – much more sensitive to rash and heedless applications of ‘new technologies’ in this domain.

Anyway, I decided to go (partly for the pleasure of visiting the Design Lab, which is known for its creative atmosphere). Ironically, the workshop was held in the very same room where Tom Fisher gave his presentation during last year’s conference on Technology Mediation (see a short piece about this event here: Theorizing Technological Mediation @ U Twente). We later met, and talked, and talked again, and all that led to a very interesting project about the ‘futures of smart textile’ later on.

I arrived a bit late and missed the introduction of the participants, but I sensed it was a mix of researchers from the University itself (including some students, at different levels), some representatives of care centres, and some people from tech companies. The workshop started with an opening presentation by Jelle van Dijk (whom I have known for years and who is currently with the Human Centred Design team of the University of Twente). Jelle presented, albeit very briefly, a wide range of different ‘technologies’ (see above) that could be used to ‘empower’ people, and then suggested discussing these ‘empowering opportunities’ in smaller groups.

Here I have to add that there was a certain ‘homework’ to be done prior to the workshop. We had been asked to bring examples of the following:

1. A good example of an inspiring ‘empowering technology’ (from your organisation, or someone else’s)
2. A ‘bad’ example of a technology that is often used, but in your vision does not present the right way to go
3. An inspiring future vision/new design that is not on the market, but would provide a radical breakthrough in empowering technology practices.


I took the assignment (a bit too?) seriously and had all of the above with me. Judging from the discussions that we had in our mini-groups, I have a feeling that I was the only one who did so; true, people were quick to present the ‘good’ example, usually of their own current project 🙂

Which is ok, and I was glad to (finally) meet Kristin Neidlinger, the founder of Sensoree. We had spotted a few of her concepts earlier and used some of them as ‘future signals’ (most notably her NeurotiQ emotive display, which we used in our Future of the Neuronet project).

During this workshop Kristin presented her latest creation (if I remember correctly, it is called a Mood Sweater), a kind of scarf that can illuminate your ‘mood’ (both in the sense of displaying it and of changing it, hopefully for the better).

There were a few more demonstrations of ‘good projects’, both during the mini-group discussions and also including a stand-alone presentation by one of the U Twente MA students, who presented her recent project (the development of a planner for people with certain mental deficiencies).

Partly due to the lack of time, but also because of the selection of ‘good’ projects to present, the discussions were a bit ‘too nice’. It is not easy to formulate a critical take on a certain project (especially when it is passionately presented by its own author).

I was in a better position because the cases that I brought were not mine, so the discussion could be (a bit more) neutral and informative. To formulate my points I will have to present these cases here, too.

The Bad.

My example of ‘bad empowering’ comes from a project run by MIT CSAIL, announced earlier this year. The idea of the project was to create a smart (as in ‘AI-smart’) wearable system that could detect a conversation’s emotional tone and feed these data back to the interlocutors (read more about this project here).

In itself it is a very interesting topic, and otherwise I would praise the team. The problem starts when they announce that “One day this deep-learning system coupled with audio and vital-sign data could serve as a “social coach” for people with anxiety or Asperger’s.”

The development of a complex system that can monitor and meaningfully interpret our conversations is a huge challenge on its own. On top of that, the project will inevitably encounter the problems of introducing this aid into the real tissue of our talks, which is quite intricate and usually resists any direct interventions. We may not remember the clashes that emerged when notebooks first entered our conversations (the real paper ones, not digital ones), but we all do remember, and in fact still experience, these clashes when somebody tries to type on a laptop during a talk, or use a mobile phone – or even simply look at a watch!

The last examples may be particularly relevant here, because the team decided to use the latest gizmo from Samsung as a platform to carry this system; this one:

To use this hugely technologized device in any meaningful way would be a tough challenge for all of us (the so-called ‘normal people’). Now, why on Earth does the team want to start with people who already struggle with communication problems?

We of course know the answer: it is because it looks more ‘noble’ and ‘moral’ – but also sexy and attention-grabbing in the contemporary media landscape (and also because it is way easier to get subsidies and grants for these politically correct projects).

The fact that for these vulnerable ‘target audiences’ a system of this kind will not be a solution to their problems but an additional burden remains completely outside the scope of both the developers and their sponsors.


The Good.

The case that I presented as a ‘good’ one is not yet a real product (but neither is the one from MIT). The concept of the Mirror Table was proposed by Sean Wang from the Pratt Institute, as part of a design research project aimed at creating a range of accessories to help people who suffer from memory loss caused by Alzheimer’s disease.

The Mirror Table helps people relearn simple tasks they are starting to forget by letting them mimic the actions of another person seen in this ‘fictional mirror’. The idea is that a caring person sits opposite the cared-for so they can see each other through an open wooden frame, and complete activities like brushing teeth or spooning food together (you can read more about this project here).

It is as anti- (or even a-) technological a concept as it could be. Not only are there no ‘modern technologies’ used, there is not even a mirror here! It is as minimalistic and conceptual as Malevich’s notorious Black Square:

Why is it of any interest, then? I think it is because it beautifully illustrates the key principle of intervening in the ‘fabric of life’ with new technologies: one should not come and disrupt the existing human activities and introduce new ones, but instead blend with the existing ones and gradually amplify them.

In this particular case there is a certain existing practice whereby care-givers assist the people in need, and this system merely enhances it, gently yet efficiently. Over time really new practices may emerge, for sure, but we don’t drop them on people in one technological waterfall.

(And we’ve seen examples of heavy techno-waterfalls in this mirror space, too. I know of at least three projects that aim at introducing all kinds of ‘smart mirrors’ with the same ‘retraining purposes’, including ones with embedded AI, of course.)

I have nothing against AI per se, or against any other new tech that emerges with increasing speed (I have some gripes with VR in its current format, but that’s another story). What I am against here is bringing these marvellous pieces in as stand-alone Holy_Grail_Gadgets that will magically change the world for the better.

What I am for in this case is of course a very detailed and nuanced understanding of the existing ‘fabric of life’, which includes not only the existing technological landscape but, more importantly, a landscape of social practices. It is these practices, routines, and unremarkable everyday things that form an ‘infrastructure of life’.

The ‘innovation’ as it is presented today is too much focussed on the ‘figures’, not the backgrounds, to borrow the gestaltist terminology. We look too much at the trees, and not at the forest they make together (or the savanna, in the case of the picture below):


The Meaningful?

In this context, what could be an example of something in this direction, of ‘re-thinking the fabric of life’? I decided to bring an example: MyFutures, a research project run by TU Delft and a consortium of a few other partners.

(A short disclaimer is due here: Summ( )n is not officially a part of this project, although we have recently done (paid) work for the project, in the form of a demo session about our Future Probing method for the team – I wrote about this session here, and you can also find more links to the project in that posting.

MyFutures has also recently published its first report with a summary of the main developments and research done so far; it is available here.)

But why do I consider their approach meaningful? (For the record, there are no magnificent ‘new’ and ‘empowering’ technologies in this project either, at least not yet, although they may all appear very quickly, knowing that it is led by the ‘Technical’ University of Delft.)

The goal of this project is to help people think about their futures better (in my view, an ultimately empowering task). But to do so, the project didn’t rush into the development of a new, AI-driven (of course) and blockchain-based (because “everything is better with blockchain!”) system to guide us into the brave new world.

No, the team started by gathering information about the current practices of ‘thinking’ and ‘doing’ the futures, identifying the moments when this futuring happens, and understanding the complex social practices that mediate the process. In doing so, the team could also spot a range of opportunities to enhance (maybe even enchant) these practices, including with the use of whatever new (but already proven) technology would do the job.

This ‘technological empowerment’ may come from different corners, and some of the things may be far, far from the contemporary buzzwords. In any case, these will (hopefully) not be fragmented pieces of technology but an integral socio-technological transformation leading to a new everyday.

***

PS: Although the above may look like a rant, it is not. The workshop was very good anyway, on its own terms, and also because I met many new interesting people who fortunately share this mode of thinking. I may start coming to the Design Lab more often!
