For all the progress that chatbots and virtual assistants have made, they're still terrible conversationalists. Most are highly task-oriented: you make a demand and they comply. Some are highly frustrating: they never seem to get what you're looking for. Others are awfully boring: they lack the charm of a human companion. It's fine when you're only looking to set a timer. But as these bots become increasingly popular as interfaces for everything from retail to health care to financial services, the inadequacies only grow more apparent.

Now Facebook has open-sourced a new chatbot that it claims can talk about nearly anything in an engaging and interesting way. Blender could not only help virtual assistants resolve many of their shortcomings but also mark progress toward the greater ambition driving much of AI research: to replicate intelligence. "Dialogue is sort of an 'AI complete' problem," says Stephen Roller, a research engineer at Facebook who co-led the project. "You would have to solve all of AI to solve dialogue, and if you solve dialogue, you've solved all of AI."

Another major challenge with any open-ended chatbot system is to prevent it from saying toxic or biased things. Because such systems are ultimately trained on social media, they can end up regurgitating the vitriol of the internet. (This infamously happened to Microsoft's chatbot Tay in 2016.) The team tried to address this issue by asking crowdworkers to filter out harmful language from the three data sets that it used for fine-tuning, but it did not do the same for the Reddit data set because of its size. (Anyone who has spent much time on Reddit will know why that could be problematic.)

The team hopes to experiment with better safety mechanisms, including a toxic-language classifier that could double-check the chatbot's response. The researchers admit, however, that this approach won't be comprehensive. Sometimes a sentence like "Yes, that's great" can seem fine, but within a sensitive context, such as in response to a racist comment, it can take on harmful meanings.

In the long term the Facebook AI team is also interested in developing more sophisticated conversational agents that can respond to visual cues as well as just words. One project is developing a system called Image Chat, for example, that can converse sensibly and with personality about the photos a user might send.

Earlier chatbots took a very different, hand-crafted approach. Steve Worswick launched the Mitsuku chatbot in 2005; it portrays an 18-year-old female and claims to be from Leeds, England. Mitsuku inherits traits from another online chatbot, ALICE (Artificial Linguistic Internet Computer Entity), and was developed using AIML (Artificial Intelligence Markup Language).
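AIML, the language behind Mitsuku mentioned above, is an XML dialect in which a bot's behavior is written as pattern-template pairs. A minimal, illustrative fragment (invented here for demonstration, not taken from Mitsuku itself) looks like this:

```xml
<aiml version="1.0">
  <!-- A "category" pairs a user input pattern with a canned response. -->
  <category>
    <pattern>WHERE ARE YOU FROM</pattern>
    <template>I am from Leeds, England.</template>
  </category>
  <!-- Wildcards (*) and srai redirects let categories reuse one another. -->
  <category>
    <pattern>WHERE DO YOU LIVE *</pattern>
    <template><srai>WHERE ARE YOU FROM</srai></template>
  </category>
</aiml>
```

Every response must be authored by hand this way, which is exactly the limitation that learned, open-ended systems like Blender aim to overcome.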
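The toxic-language classifier "double-check" described above can be sketched as a thin safety layer sitting between the model and the user. The sketch below uses a toy blocklist scorer standing in for a learned classifier; the function names, blocklist, and threshold are illustrative assumptions, not Facebook's implementation:

```python
# Sketch of a safety layer: before a chatbot's reply is shown, a
# toxicity classifier scores it, and flagged replies are swapped for
# a safe fallback. The scorer here is a toy blocklist, standing in
# for a trained toxic-language classifier.

BLOCKLIST = {"idiot", "stupid", "hate"}  # illustrative only

def toxicity_score(text: str) -> float:
    """Fraction of tokens that appear on the blocklist (toy classifier)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in BLOCKLIST for t in tokens) / len(tokens)

def safe_reply(candidate: str, threshold: float = 0.1) -> str:
    """Return the candidate reply only if it passes the toxicity check."""
    if toxicity_score(candidate) > threshold:
        return "Sorry, let's talk about something else."
    return candidate
```

Note that a filter like this checks each sentence in isolation, which is why the researchers caution it cannot catch context-dependent harms such as "Yes, that's great" uttered in reply to a racist comment.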