Kate Weinberg: I think we should get to a place where we have sexual education in general, whether or not we get to a place where we have robots with great clitorises. Which would be great, too, as an additional tool. But I think so many of our sexual problems as a culture and a society result from that basic level of sex education being missing.

Dorothy Howard: I'm also interested in [focusing] on LGBT communities and the use of technology. Often, people who aren't necessarily socially accepted in their day-to-day lives turn to online communities to really be themselves. To feel like, "OK, this is a space where I can find people and feel protected, and I wouldn't be able to do that on the street." Similarly, people with types of social trauma, as mentioned earlier, might feel more comfortable in chat rooms or interacting with other humans online for any number of reasons. And people who don't want or like to interact with other humans have another option.

H&F: Would this be more beneficial as a training-wheels situation, so that people can then go out into the world and feel confident and comfortable navigating human relationships? Or would it be a potentially acceptable end-game situation? Like, someone's sexual preference could be AI.

Dorothy Howard: I think it's a really interesting question. A lot of the points we've mentioned have touched on this: is the responsible use of these technologies a question of not losing our humanity? Of trying to preserve humanity and human sociality because it's preferable to relationships with machines? That's an open question.

Gareth Stoneman: It's a "should" versus a "what would happen" sort of situation. We're all pretty intelligent. We're sexually aware… so yes, these things should be used in the service of humanity. They could be used for good; they should be used to open people's minds and prepare people for relationships. But market forces are [powerful], so what will probably happen is we'll just get fuck bots… because that's what the marketplace will find a use [for]. In all these artificial intelligence conversations, you can have high ideals about what should be done. As it actually happens, it's often very different from that, because the technology is used to produce something very base-level. It's simpler and easier to harness. So what will happen first? Probably not the high-minded things we're discussing here. I think all you can really do, as one of the people creating this technology, is participate in the advocacy that needs to happen to make sure the technology develops in a responsible and beneficial way. Advocacy for doing the right thing when building these things.