Beki Grinter

Posts Tagged ‘human-robot interaction’

Today I friended a robot on Facebook

In research, social media on October 13, 2010 at 11:51 am

A few days ago I received a friend request from Kuka Youbot (not a page or a group that you become a fan of; it has its own account).

It's the first time I've had to decide whether to friend a machine on Facebook, and I was not sure what to do.

I do not friend people I do not know on Facebook. I have declined requests. Before I decline a request I typically review the person's information, usually looking for shared friends. That sometimes causes me to realise that there is a possibility that I actually do know the person. This has been most pronounced with people who have changed their names, which is most common among my female acquaintances who have subsequently married. In just a few cases I have asked our potentially mutual friends who a particular person is; I am constantly impressed by how good some people are at recognising a person from an image taken twenty years after I last knew them face-to-face.

Almost all my Facebook friends are people that I have at one point in my life known as face-to-face friends. Some of my Facebook friends go back as far as kindergarten, through all the school and employment affiliations that I've had. There are exceptions. I am friends with one English expatriate who followed a very similar path (source to destination). I am friends with a family historian who has identified that part of his family and part of mine are connected. We are distant relatives.

I have a policy regarding students. I am happy to friend any graduate/undergraduate who asks me, but I will not ask them. This reflects the power relations of the environment that we share. I am happy if they feel comfortable being connected to me and reading the random things I post, but I can also imagine a degree of discomfort at being "forced" to connect to a faculty member when they would prefer to keep what they post to Facebook private from faculty.

I decided to friend the robot. I decided to because it has mutual friends, two of my colleagues in the School of Interactive Computing and the Robotics and Intelligent Machines center. I decided that if the Youbot was something that they would friend, then I would too. They became a recommendation of sorts for making a decision about friending the robot. And since they’re both roboticists I am sure that they have good criteria for selecting which robots to friend and which not.

And it likes that I’ve friended it. How friendly. It also wants to know whether I’m going to IROS and wishes me a happy weekend.

A Case for Feminist Studies of Technology

In computer science, empirical, research, women on March 1, 2010 at 12:08 pm

My colleague Henrik, who tracks many things Robotic, recently blogged about a man who has invented a (non walking) robotic female companion.

On reading the Daily Mail report, I think that this is a good argument for feminist studies of technology.

The paper reports Le Trung, the robot’s inventor as saying:

'Aiko is like any woman, she enjoys getting new clothes,' he said.
'I loved buying them for her too.'

and later the paper reports that…

Aiko, whose age is ‘in her early 20’s’, is 5ft tall and has a perfect 32, 23, 33 figure

As a woman I find these statements very disturbing. What he has built, what he calls a woman, exhibits very strong stereotypes. Why, for example, doesn't his robot share his interests: programming, problem solving, mechanics? Instead, she likes new clothes.

Her figure reinforces the worst of stereotypes about body image. With a 23 inch waistline, it was reported that Victoria Beckham was too slim. They add that that's 11 inches thinner than the average British woman's waist and that it's a U.S. size 0. It's not average. But it is an image of womanhood that is widely recognized as having contributed to some people's struggles with body image (Peggy Orenstein's book Schoolgirls is one good but depressing read on this topic).

But Aiko is always helpful and never complains. She is the perfect woman to have around at Christmas.

And what’s the implication here? One of subservience. An ideal woman never complains and helps out. Perhaps I’m being overly critical, but it is a design choice in this case. Someone thought it would be better to build a “female” robot this way.

For the last twenty years feminist scholars have been examining the expectations about women that are built into technology, and into the media that surrounds those technologies (media that implicitly or explicitly situates women as having particular engagements with those machines, ones that differ from male interactions). This is of course alongside a whole lot of other things… If anyone ever wondered why that research matters (and I am *sure* that people have), I would encourage them to consider this machine most carefully. What are the implications of designing robots that stereotype women, and in so doing continue to harm and repress as well as reject the real experience of womanhood? What do these stereotypes, and their propagation through technological forms, do to perpetuate notions of womanhood that are, for many *real* women, inaccurate and potentially oppressive?

By building a robotic companion, he is also creating the opportunity to distance himself from an actual relationship with a real woman, one that perhaps he would find educational with respect to the above. And this, in addition to potentially exacerbating the above, raises other questions. What is the science of understanding what it means to design robots that allow people to retreat from human interactions, replacing them instead with robotic interactions? What will it mean if people choose to live with robots over living with people? What will the consequences for society be?

This is just the first experiment, but it’s one that raises important questions.

Revisiting Visions

In academia, academic management, C@tM, discipline, HCI, ICT4D, research, social media, wellness informatics on June 3, 2009 at 4:23 pm

So who knew that a blog would encourage me to think harder.  Almost certainly some researchers I know, and I apologise to them profusely, but it’s still a new experience for me.

I wrote about vision (and strategy, but we never mention the latter without mentioning tactics). I said that I felt it didn’t come naturally to me, that I was more instinctual.  Beginning with instinct… here are the things that seem important to me.

  1. Wellness Informatics. I recently wrote about my version of this idea but it's something I've been thinking about for about a year. The gist is that health informatics (or more recently biomedical informatics) is largely (not completely, largely) focused on a medical response to health issues. But health and wellness are important partners. Wellness takes place in a community, and possibly without reference to or interaction with the medical establishment. And what really triggered this idea for me was that in some communities, the interaction with the medical establishment is complex. It depends on where you start.
  2. Human Network Interaction. I have a long-standing interest in making the Internet/home network a better user experience. What the popularity of the Internet has successfully proved is that a network architecture and protocols designed for technical specialists are miserable for end-users at home, not to mention for technically trained people at home. What Keith Edwards and Nick Feamster will tell you, and I agree, is that this situation will probably be resolved when HCI moves "down the stack", i.e., when networking and HCI are co-designed to meet the unique constraints and opportunities presented by unmanaged environments.
  3. Narratives of Reach in ICTs. Paul Aoki first explained to me the importance of cross-cultural flows in computing. He made a compelling case that religious organizations are using ICTs not just to expand out of Westernized countries into emerging nations but also vice versa. Research and conversations later, I understand that religion is a fascinating place to examine how emerging nations are using Information and Communications Technologies (ICTs) to expand into the West. One thing that's important about this for me is that it turns a traditional narrative on its head. I participate in communities where ICTs are helping Western corporations expand their reach into emerging nations (heck, I worked on such a project).
  4. Distributed Intelligence in Human-Robot Interaction. Robots are a fascinating computing platform (in so many ways). In the last few years a quiet revolution has occurred. Robots have always been a part of our collective narrative (e.g., in science fiction) but in the last few years they've been quietly moving out of our imagination, away from our screens, and into our homes. That's a change… I think that this puts pressure on robotics, and on human-robot interaction, to devise modalities by which we may interact. The focus seems, at least to me, to be on making robots smarter. And that's a good thing, but I think it scopes the design space in limiting ways. Specifically, I think that if the design space accounts for how and why people want to engage with robots, then it opens up to a kind of human-robot distribution of intelligence. And I think that's the relationship people actually seek with robots.

Each of these is clearly a product of the interactions I have with the students I work with. There are some interesting cross-cutting themes though.

One is instinctual: looking at problems the other way around. Arguments frequently have a temporal-linear narrative. The last two items sort of exhibit this property. Narratives about the expansion of ICTs have them reaching out from the rich, urban, industrialized to the poor, rural, pre-industrialized. But it can be, and is, also the other way around. Arguments about intelligence dominate the rhetorics of robotics design, but what about arguments that propose emotional engagement? I'm not saying that dominant arguments/narratives don't have their place, but I am saying that considering the possibility of what's not there is, well, mind-opening.

Another example: decomposition. Software engineers spend a significant amount of time focused on decomposing a problem into a series of modules that can be worked on individually. But if you think of software as a linear-temporal activity, then the process of reassembling them, of creating the whole from the sum of the parts, becomes much more visible. And I once argued that it was that process, the process of recomposition, that made software engineering so human-centered: the need to be able to put things back together drove a significant number of the collaborations required to keep all the individually separated parts in alignment so that they would fit together again.

Funny, I always think that I didn't pay much/enough attention in John L. King's class about argument morphology. Perhaps that's not true. I thought my colleague LP was the one who was paying attention.

Another theme, which I hardly know how to express, is to do with considering the extremes. I'm not the first to think of that, nor would I claim to be. I have colleagues whose research takes place in countries like Liberia, or among Atlanta's urban homeless. We have called this computing at the margins. I feel that a lot of the projects I'm involved in have a feel of: take something that works somewhere, in a particular context, and then watch it fail or change in a different context. That certainly describes the Home Networking research. I think it also describes the focus on religion. Religion is ubiquitous and a site for many interesting and diverse uses of ICTs, but it's not a central topic in HCI. Well, it wasn't; we're working on its inclusion. But why? Because it's ubiquitous, because religious values have long shaped the appropriation and rejection of technologies, as well as being some of the major reasons driving adoption. And most importantly because it allows you to look at non-religious use in a different light. It is a new frame, a new perspective from which to re-examine what otherwise gets lost because it is assumed. It also captures my interest in Wellness Informatics. I'm far more interested in the cases where the relationship between the community and the medical establishment and its knowledge is not straightforward, because it reveals the all-important hierarchies.

But, what pulls all this together? That’s the question I have going forward.

What do I do in the office, part II

In academia, computer science, discipline, HCI, research on December 20, 2008 at 3:00 pm

I didn’t say much about research did I?

(An aside: last semester was one where a variety of institutional things dominated my attention, not least the tenure process. The day the faculty all meet to discuss you feels beyond weird. So perhaps I have been focused elsewhere).

So, research. I do some. Well, more accurately, my wonderful students do research, and I try to help them do that. Wow, that's an amazingly rewarding experience. This year, they have all individually been very successful, which leads to a feeling that cannot be described.

My interests lie, as I have said, at the intersection of computing and humanity. I’m not alone, but what I think I bring to the table is that I am deeply interested not just in the new contexts in which computing finds itself, but also in understanding how a human-centered perspective can shed light on problems in traditional fields of Computer Science.

To describe this, I used to use the phrase HCI goes Down and Out, but it was suggested that I switch off of that path. It was suggested to me that playfulness (while an endearing part of the British character) was not perhaps appropriate for the tenure statement (darn, I thought a bit of humour would brighten the reader's day, especially since they were going to spend some hours with it all).

My research has been deeply influenced by the truly amazing career I've had 🙂 To explain. I've been fortunate enough to work in some amazing places and for some amazing people. I began my career at Bell Labs. Bell Labs has made some amazing contributions to Science generally, including Computer Science. I was managed by, and worked with, people who changed the field. Bell Labs was a place where people came to visit too, so I met a range of people. Collectively these people changed the fields of Compilers, Operating Systems, Algorithms, Software Engineering, … and then there was the place, and all that it stood for, including the history of invention that produced things like the transistor…

I then moved to Xerox PARC. And to its Computer Science Lab. And that was not short on legacy in Computing. Indeed, I now know where the original Ethernet still remains in place… Weiser's pads were on the walls. And there were the people again, and the people who came to visit.

But if I learned one thing from this experience, it is that research knows no disciplinary boundaries. Good research simply is. That's the common experience. All these people who contributed to computing seemed to care about solving hard problems. Of course they all had disciplines that they contributed to, but I am struck by their focus on research.

So, as I think about research, I am going to try not to get mired in disciplinary debates, or caught in the nets of disciplinary boundaries, … but stay focused on the prize, the resolution of hard problems, hopefully with and because of the work that I do with my students.