![]() ![]()
If by chatting online Tay can help Microsoft figure out how to use AI to recognize trolling, racism, and generally awful people, perhaps she can eventually come up with better ways to respond. Peter Lee, the vice president of Microsoft research, said on Friday that the company was deeply sorry for the unintended offensive and hurtful tweets from Tay. Really, what happened provides an excellent learning opportunity if Microsoft wants to build AI that’s as intelligent as possible. This view is so widespread that we can identify it as a certain typical cognitive distortion or bias. Many users, commentators and experts strongly anthropomorphised this chatbot in their assessment of the case around Tay. Tay’s training set consisted of a bunch of nasty tweets, so her artificial brain slurped them up and she spit out what seemed like proper rejoinders. This study deals with the failure of one of the most advanced chatbots called Tay, created by Microsoft. Conversational AI is really tricky, and it learns by being trained on lots of data. 12, 2023 When a chatbot called ChatGPT hit the internet late last year, executives at a number of Silicon Valley companies worried they were suddenly dealing with new artificial intelligence. The behavior Tay reacted to-and the reactions she gave-should surprise nobody at Microsoft. That way, she could refuse to respond to certain words (like “Holocaust” or “genocide”), or respond with a canned comment like “I don’t know anything about that.” She also should have been prevented from repeating comments, which seems to have been what caused some of the trouble.īut people act horribly online all the time. Microsofts new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions. The AI chatbot Tay is a machine learning project. ![]() #Microsoft chatbot tweets Offline#The goal for Microsoft was to experiment with conversational understanding by analyzing tweets from. (Reuters) - Tay, Microsoft Corp’s so-called chatbot that uses artificial intelligence to engage with millennials on Twitter, lasted less than a day before it was hobbled by a barrage of racist. The troublesome cyber-teen has since been taken offline for ‘upgrades’ and Microsoft has deleted some of her more offensive tweets. #Microsoft chatbot tweets software#As artificial-intelligence expert Azeem Azhar told Business Insider, Microsoft’s Technology and Research and Bing teams, who are behind Tay, should have put some filters on her from the start. A Microsoft 'chatbot' designed to converse like a teenage girl was grounded on Thursday after its artificial intelligence software was coaxed into firing off hateful, racist comments online. In 2016, Microsoft launched a Twitter chatbot named Tay. ![]()
0 Comments
Leave a Reply. |
AuthorWrite something about yourself. No need to be fancy, just an overview. ArchivesCategories |