Interruptions and Interaction Design with Duncan Brumby

7 min read

One day, I got an email saying Duncan Brumby was going to give a talk at our faculty. The email had a link to some of his research interests. I looked him up on YouTube and was interested enough to book a one-on-one with him to ask him some questions. I've edited our talk for clarity and quick reading, so please treat it as fictional.

MB: You’ve talked a bit about how reinforcement learning is similar to old psychology. What’s that about?

DB: Yeah. So, like, reinforcement learning techniques [are] what we're seeing in the current AI systems, right? Deep reinforcement learning algorithms are basically a learning approach: they figure out a strategy that maximizes a reward function. And it's got certain qualities to it: you've got to have a large space of options, you've got to have a very clear objective about what you're trying to do, and then you kind of train your technique to do that.

And we've applied this in our work, with my PhD supervisor Andrew Howes and also Antti Oulasvirta, another colleague; they're really pushing this idea of computational interaction design. I think within the HCI context, those types of techniques could also be used for modeling users, and also for designing interfaces. My background is in cognitive science, where there's a lot of this kind of cognitive modeling work, where you try to model what people are doing. You can do that using these reinforcement learning techniques. All it does is provide a theory and a mechanism to identify behaviors that make sense, given the structure of the environment, right?
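(Editor's aside, not part of our conversation: the "large space of options, clear objective, train to maximize a reward" idea can be sketched in a few lines of Python. This toy Q-learning loop is my own illustration, not code from Brumby's or his colleagues' work; the actions and reward values are made up.)

```python
import random

# Toy Q-learning sketch: an agent chooses among actions and learns which
# one maximizes a simple reward function. All names and numbers are
# hypothetical, for illustration only.
random.seed(0)

ACTIONS = ["scroll", "tap", "swipe"]                 # the "space of options"
REWARD = {"scroll": 0.1, "tap": 1.0, "swipe": 0.3}   # the "clear objective"

q = {a: 0.0 for a in ACTIONS}   # estimated value of each action
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

for _ in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = REWARD[a]
    q[a] += alpha * (r - q[a])  # nudge the estimate toward the observed reward

best = max(q, key=q.get)
print(best)  # the learned strategy settles on the highest-reward action
```

The point of the sketch is only the shape of the mechanism: given an environment's reward structure, the loop converges on the behavior that makes sense in that environment.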

MB: I think I've heard something new in there: trying to apply those computational methods to aid User Interface design, or Interaction design generally.

DB: Yeah, so computational interaction design is now a subdomain at CHI, so I definitely encourage you to look at the papers being published in that area. People like Antti Oulasvirta and Rod Murray-Smith have been leading in this area. A lot of it is thinking about how to do things like adaptive interfaces, or layouts that optimize for a particular task [or] a particular type of user.

MB: Okay, yeah, I really like that. Thank you. So you had talked about value-centric design, which you liked because it prioritised the user's choice. But is the user always right? I think there would be some responsibility on the designer, because if they are going to put something out there, then they will have to bring some amount of worldview to make it look a certain way. What do you say about that?

DB: Yeah, again it's trade-offs. It's the classic trade-off between freedom and control, isn't it?

So I guess what you're saying is that there should be a default option which seems sensible. But to link back to your previous point, this whole idea of adaptive computational interaction design says, "Let's take this idea seriously: we should have different interfaces for different people, for their preferences, styles of work, [etc.]" If these advances in AI are to be believed, I think there is potential for that, right? It allows for more customization. My point in the talk was to highlight, thinking [for example] about the phone call UI, a very clear example where a designer has made a very explicit choice to prioritize one task over another through the design. Personally, I thought that was a bad choice, but yeah, you're right.

But that's inherent to design work, isn't it? Designers have to make those big calls. It's easy to criticize afterwards, and it's good to see that they have now changed it. What makes these things so interesting, and what makes Human-Computer Interaction so interesting, is that we use a tool which is built, made, and constantly changing. We've basically got an amazing thing here that we can continually evolve to suit our needs, whether through the judgments of a good designer or through some kind of learning algorithm that figures out what best supports us in our days. I think it's an exciting area to research and work in, because the technology is constantly changing, and so, then, what we do with it is changing.

MB: That's very good; it's giving me a lot to think about. What about 'urgent versus important'? Who decides, and does it change?

DB: That was what we did on email, you know; that was a fun paper. We worked on email and how people triage and manage their inboxes. It was a fun thing because I'm interested in these kinds of questions from a methodological point of view. How do you study people's behavior on email? You can go talk to them, you can do interviews, and they can tell you some stuff. You can bring them into a lab, give them a fake data set, and say, 'work with this inbox'.

We had something that was somewhere in between; we did this in the wild. We sent people fake emails, but to their regular email accounts, then gave them money for how they responded, and we varied each of those emails. I thought that was interesting because it allowed us to get on top of the gap between the subjective and the objective. We gave the task an objective function that said: "Okay, this email, which you need to respond to within this timeframe, is worth this much money." And we were able to systematically vary these things and see how they affect people's response times. In reality, people make these kinds of judgments in their heads, so I don't think you're ever going to capture [reality], but it does allow us to start thinking about dimensions like importance (that's value) and urgency (how quickly a response is needed), [which] allowed us to start pulling those apart.

So we could see how the urgency and importance of a message affected how quickly people responded to it. And what we found (this is very practical advice from that work) is that just say[ing] 'urgent' makes you more likely to get a response. People prioritize urgent things over important things, and you see that over and over again.
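(Editor's aside: to make the setup concrete, here is my own minimal sketch of the value/urgency idea described above. The actual payoff scheme in the study may well differ; the class, function, and numbers are hypothetical.)

```python
from dataclasses import dataclass

# Hypothetical model of the study's two dimensions:
#   importance = the value of replying (money in the experiment),
#   urgency    = how soon a reply is needed.
@dataclass
class Email:
    value: float          # importance: payout for a timely reply
    deadline_min: float   # urgency: respond within this many minutes

def payout(email: Email, response_min: float) -> float:
    """Full value if the reply lands inside the deadline, nothing after it."""
    return email.value if response_min <= email.deadline_min else 0.0

urgent_cheap = Email(value=1.0, deadline_min=5)      # urgent but low value
important_slow = Email(value=5.0, deadline_min=120)  # important, relaxed deadline

print(payout(urgent_cheap, 3))      # replied within 5 minutes -> 1.0
print(payout(important_slow, 180))  # missed the 120-minute window -> 0.0
```

Varying `value` and `deadline_min` independently across emails is what lets the two dimensions be pulled apart: you can then check whether response times track the deadline (urgency) or the payout (importance).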

MB: Do you think there's an opportunity for AI to help there? Maybe with tracking, or generating insights?

DB: Yeah, that'd be the question: whether these chatbots could be useful, right? I know some colleagues have been using chatbots to get people to talk about what they're doing, to keep track. That would be an interesting area for more research: the opportunities for chatbots in personal task management, particularly for those insights. You know, "I'm going to do this. I'm thinking of doing this, or this. What should I do next?" The trouble people fall into is that they have unrealistic expectations about how much they can get done, and about when the best point to get stuff done is.

MB: Oh, yes. So: how much to get done, what's next, and when is a good time to get work done? It's almost like a learning process, scheduling the tasks of the day. Aside from that area, and aside from computational processes being integrated into designing interactions, where else in HCI do you see AI having a useful impact, whether in research, development, or something else?

DB: Oh, yeah, massive impact. I mean, it's writing grant proposals, writing papers, analyzing data, helping with peer review. Current generative AI is particularly good for fairly standardized, generic [things] like reference letters and accept/reject decisions. There's a whole class of really quite effortful work which is simultaneously quite templated but still quite specific. I'll send my bullet points to my generative AI to write the full letter. You'll receive the full letter, and then you'll pass it back to your generative AI to get the bullet points back out the other end. So yeah, there is this kind of theater of work that we're going to go through, because I think what generative AI is going to do is just expand everything. But then we don't have the capacity to read all that, so we'd want to shrink it again. It's going to help us expand our thoughts to a socially acceptable length, but it can also help us make sense of a very long-winded [conversation].

You can feed [it] this kind of script (of the conversation recording I was making) and be like, "Okay, what are the three things this guy was on about?", right? And it'll be able to give you that in a fairly simple way. Much shorter than I'm saying it.

MB: But the actual conversation is at least humorous, so that’s good. One last one. You mentioned retaining human brilliance. What does that look like for you? Keeping the human in the loop.

DB: Ownership of decisions, I think. And that's actually frustrating, as many people don't want to do that; people don't want to make a decision. But yeah, it's actually like, "So this is what I think, and this is why."

MB: I listened to a Defense Industry expert who told us they had trained an AI to do in two minutes what professional hackers do in 40 minutes. And so there's no use for human beings there.

DB: Yeah. It seems like there's a bit of a battle. Because if the AI gets better, then you need fewer people. But do we want more people in, or people alongside, even if the AI is clearly better?

MB: What do you think?

DB: Yeah, these types of situations. We spent a lot of time talking about this at [a] seminar a couple of weeks ago. They called it a 'lights-out factory', because no lights need to be on: there are no people, and the entire factory is run by robots. If I remember, the speaker's point was that it was going to be a yin-and-yang kind of thing. We'll get to high levels of automation, which will then allow us to develop our craft skills, slow work, hobbies, [etc.] And then, eventually, we'll get so poor that we'll bring down the machines and want to get back in.

So yeah, this is kind of like fantasy, right? You could imagine it as a sci-fi novel: you go through a cyclical pattern of more or less automation over time, oscillating back and forth. It's interesting. But you can already see this, right? You can already see that the value of certain craft hobbies has really come back. I mean, look, during the pandemic people [were] getting into baking bread. Why do that, right? Bread's cheap. But there's a skill, and there's a quality that comes from the artisanal [touch]. And it's crazy expensive as well. If you go to London and you want to get some nice cake or good coffee, it's going to cost you a fortune, right? Salaries are going to go up for people who do practical work that can't easily be automated.

MB: Oh yes. That might be interesting.

DB: No, the world always changes, that’s for sure.

MB: We'll see where it goes next. Thanks for the talk!

