Better definitions of AI with Prof. Richard Harper.

6 min read

You, the reader: Who is Prof. Richard Harper?

His Bio: Richard Harper is Professor of Computer Science and formerly Director of the Institute for Social Futures at Lancaster University. He is a Fellow of the IET, Fellow of the SIGCHI Academy of the ACM, Fellow of the Royal Society of Arts, and Visiting Professor at the University of Swansea, Wales. His research is primarily in Human-Computer Interaction, though it also includes social and philosophical perspectives on the role of computing in society. He has written 19 books, including ‘The Myth of the Paperless Office’ (MIT: 2003); ‘Choice: the sciences of reason in the Twenty First Century’ (Polity: 2016); ‘Trust, Computing and Society’ (Ed. CUP, 2015); and ‘Skyping the Family’ (Ed. Harper, et al., John Benjamins). ‘The Shape of Thought: reasoning in the age of AI’ will be published in January 2025. Prior to joining Lancaster, he worked at Microsoft Research Cambridge, at Xerox EuroPARC, and at the Digital World Research Centre at the University of Surrey. He has also worked in various start-ups and research consultancies. He lives in Cambridge with his wife and a ginger cat, his three children having long left home.

(he also has this book The Shape of Thought: Reasoning in the Age of AI which I haven’t [yet?] read)

Boyd: Is there benefit in looking back to the previous industrial revolutions to estimate what AI will look like at full utilisation?

Richard: I’m not sure that there is much value. Partly because people are assuming that AI is a technology which will automate and therefore put people out of jobs, just like the various stages of automation in, say, the Industrial Revolution. But as it currently stands, I don’t think AI does much automation. I don’t think it has much power. I think AI does lots of things where the benefits might be automating, for example in sensor operation in a car, which makes the car more efficient. But that just means my desire for a car increases. So I’m buying more cars, not fewer cars, and I’m not travelling less because the car is (now) more efficient thanks to AI.

So the problem for me is that I don’t know what AI is, such that I can say it would have this impact. And if AI is about automating, I’m sceptical about that, because while it does automate, the label is also claiming intelligence. One of the things you might say about intelligence is that intelligence labels those human acts which are 

  • unique and 
  • only undertaken in a novel situation, 

but that’s what AI can’t do. So I don’t know what the thing is that is called AI.

Boyd: I think that’s good. I remember you had mentioned that you’re thinking of calling it narrow AI for the meantime, because it can’t do exactly what it claims to be able to do. 

I was also wondering: what do you think about using human-to-human interfaces to implement human-to-nonhuman interactions? For example, if I’m on WhatsApp, I chat and then she responds. And yet, if I open the ChatGPT app, it looks and behaves EXACTLY the same. I “chat” and then it “responds”. So I’m asking about that design philosophy where they take the things humans use with other humans and apply them to humans interacting with robots. I don’t know if you have any thoughts about what that looks like or what that is, especially on the HCI side. 

Richard: Well, I mean you can ask Jackie this, and Jackie can ask Abby. And Abby will tell you that the reason why ChatGPT’s interface looks like that is because it tested well. And what testing means is not that it’s been researched by scientists or by HCI specialists! They’ve hired a bunch of folks in San Jose, given them $50 and said, “which do you prefer, this or that?” And people will naturally say, “that looks more friendly, looks more cute, that’s a bit like WhatsApp, well that’s cool”. And then the company will say, “oh well, we’ll do that then!” There’s no research there. It’s not a case of actually thinking it through. “It’s cute, looks good, let’s make it that way”. And the fact that people like you are saying, “well this is a bit muddling, my messages to my wife look like messages to ChatGPT”, is exactly what I’m talking about! Isn’t that a muddle? It’s just not helpful, is it? 

Boyd: You had mentioned a bit about the personification of AI, these narrow AIs. The first-person responses and that kind of thing. It seems to me like it’s selling an idea rather than being a tool.

Richard: And the idea is something magical. “It’s like a person…” No, it’s not like a person, it’s just a machine pretending by using the first person! It’s really insulting. It’s a fraud and it should be called out as a fraud. Now, I think you can say to the user, “the system’s gonna answer this in the first person because we think it might amuse you.” 

And you know, Google have this thing where they’ll convert your text into two people doing a podcast and it’s fun. It’s just fun. Is it two people? Do I think that’s artificial intelligence? No, I just think it’s a game. That’s cool. I like it. It’s totally useless. I wouldn’t use it for any syntactical purpose, but it’s fun. Yeah.

Boyd: Does work have a personal aspect to it? Music versus secretarial work, say. For example, we have AI that generates music versus actual musicians, and we also have AI that does secretarial work versus actual secretaries. Is there a distinction between types of work that have a personal aspect to them? Or does all work look the same? 

Richard: No, there are lots of distinctions. And a good HCI researcher should try to identify what those distinctions might be, because it is to the benefit of everyone. You might distinguish some administrative work which should be automated, because basically it could be. And the work which is more complex, more improvised? Well, that shouldn’t be automated; you can’t automate it. You need to find out where those distinctions are. 

Boyd: What are your thoughts on the goals of the HCI makers? You had mentioned that the HCI maker (the person who is creating that interface between the user and the system) has to bring some goals with them, or has to understand what is going on. Do they come with their own intentions? 

Richard: Your goal might be to support creativity, support human connection, or to give people tools for better understanding complex information. Those are all different goals, and then you have to come up with the grammar of action for interacting with the tools that might help you do that, and that grammar of action will be different for each of those scenarios. And then the kinds of abstractions that you might use might be different too. 

Boyd: Following up on that: should the HCI maker have some say in the result? For example, if my interest is monetary, societal, or religious, should I have some say in that process of making something for the user, or should I just build the libraries?

Richard: I think you have to have some say, but how much say might depend on what your intentions are. For example, when researchers were working with Xerox, they agreed: let’s make documents our goal. But there are many other things that they could have designed computers as tools for. 

I think you’re kind of suggesting that an HCI researcher might impose their goals on users. Well, that’s just a bad HCI researcher. An HCI researcher should recognize all the possible goals that are available, map those onto the different user groups that want them, map them to technology, and then make a decision as to which ones you’re going to go for. 

Someone near us: Some publicly available LLMs are more human-like. What do you think of that?

Richard: I don’t find it appealing that they’re more human. I’d rather they told me that they’re not human and that they’re not behaving in human-like ways. They’re not a first person, and I wish they didn’t have a human voice. It’s just annoying. 

Someone near us: Do you intend an AI to be like a human, or more like a human assistant? 

Richard: No, I think AI is a label for lots of different tools. Some of the tools would be like an assistant, and some of the tools would be just an automaton doing things for me. And sometimes I think the AI tools can make me work harder. So it’s not one thing, that’s the trouble, and LLMs kind of offer one thing, which is a bunch of words. Well, they might also say: what you’ve asked is a very difficult question, and the best way of answering it is for you to go and read a book. Don’t ask me. 

Boyd: Thanks for the talk!

