*Lauren, with her team at Voxable, developed Ghostbot, a text message autoresponder that anyone can add to the Burner app to handle “unwanted, aggressive, or abusive texting situations.” In advance of her talk at the Talkabot Conference, Lauren discusses feedback on Ghostbot, the need for bot-powered personal firewalls, and the difficulty of gauging intent when machines interpret human interactions.*
We’ve gotten a lot of positive feedback. The best of it has come from users right in our target audience: the biggest fans were mainly women who experience harassment while dating online or over messaging. They have found it genuinely useful not to have to deal with the no-win situations they can find themselves in while communicating with somewhat irrational dates.
Even for women we’ve talked to who are off the market, the resounding feedback we heard about both Ghostbot and Burner is, “I wish I had that when I was dating!”
There has been some negative response, though.
Those were concerns we had going into the project, but when we examined the hostile climate women face in dating, we knew the emotional maturity discussion went beyond ghosting.
The biggest challenge was just letting the bot do its thing rather than constantly watching it and trying to train away everything that confused the NLP. We decided to take a step back and let Ghostbot do Ghostbot for a bit before we started training it heavily, and it was a nice break. Other than the ongoing training, we had no big hurdles; a couple of bugs come up here and there.
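Ghostbot’s actual implementation isn’t described here, but the kind of intent-matching autoresponder being discussed can be sketched as a toy keyword classifier. Everything below — the intent names, trigger keywords, and canned replies — is hypothetical illustration, not Ghostbot’s real NLP:

```python
# Toy sketch of an intent-matching text autoresponder.
# NOT Ghostbot's actual implementation: the intents, keywords,
# and replies are invented for illustration only.

import random
import re

# Hypothetical intents, each with trigger keywords and canned replies.
INTENTS = {
    "greeting": {
        "keywords": {"hey", "hi", "hello"},
        "replies": ["Hey! Busy right now.", "Hi, can't talk at the moment."],
    },
    "hostile": {
        "keywords": {"whatever", "ignoring", "rude"},
        "replies": ["Sorry, swamped this week.", "Can we talk later?"],
    },
}

# Used when no intent's keywords match the incoming message.
FALLBACK = ["Sorry, who is this?", "Running into a meeting, ttyl."]


def classify(message):
    """Return the first intent whose keywords appear in the message, else None."""
    tokens = set(re.findall(r"[a-z']+", message.lower()))
    for intent, spec in INTENTS.items():
        if tokens & spec["keywords"]:
            return intent
    return None


def respond(message):
    """Pick a vague, non-committal reply for the detected intent."""
    intent = classify(message)
    replies = INTENTS[intent]["replies"] if intent else FALLBACK
    return random.choice(replies)
```

The “training” mentioned above would, in this sketch, amount to expanding the keyword sets whenever a real message confuses the classifier; a production bot would replace the keyword match with a proper NLP intent model.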
Ghostbot has been called “a personal firewall.” How important do you think NLP-driven bots will become in protecting people as society becomes increasingly connected via social networks?
This will become massive the more that we connect the intimate details of our lives to social networks. Social networks have basically exploited our personal data in order to sell that data to big brands so those brands can sell us more things.
The thing is that even as this becomes more evident to us, as when the Radiolab podcast did a story about Facebook’s social engineering, the average consumer doesn’t really seem to mind. I think we might reach a point where the intimacy we allow, not just with other humans but with companies, brands, and governments, encroaches on our sense of well-being.
Bots as personal firewalls might be able to alleviate that for us. BigHealthInsuranceCompany™ wants to create a bot to assess my insurance claims? Ok, I’ll hire a bot to take care of the insurance claim process, and the appeals process.
It is often difficult to discern a user’s intent when communicating via text-based input. How does voice interaction open up nuance as an indicator of how human-machine interaction unfolds?
There are a lot of aspects of voice that expand the interaction space tremendously, and they exist on both the machine side and the human side of the interaction. Voice agents can take on personalities that go beyond the words they use, through sarcasm, pauses, inflection, and emotion.
This means voice agents can be employed in contexts we haven’t been able to explore while tied to manual input. They can be used while performing surgery, doing mechanical work, cooking food, or being a mom.
On the human side, speaking is a more immediate form of input than typing, and beyond its intent, that input can be encoded with other information. Emerging technologies like voiceprinting analyze your voice and use its distinct qualities as a form of authentication, and voice analysis in general is already being used for lie detection, diagnostics, and fraud detection. This kind of emotional analysis of voices will become an important signal within future voice interfaces.
Join us Sept 28–29 in Austin, Texas, to discuss bots, conversational software, and community at the first-ever Talkabot Conference. Full-track tickets are still available but going fast!
Follow @TalkabotConf for the latest announcements and news.