Chatbots on one of China’s most popular messaging apps have been pulled after they went rogue and criticised the communist government.
Tencent, a Chinese internet tech titan whose messaging app has more than 800 million users in the country, introduced two chatbots in March: Baby Q, a penguin, and Little Bing, a little girl.
'Our interest was having bots who could talk to people,' Mike Lewis of Facebook's FAIR programme told Fast Co Design.
Currently, the tech industry is infatuated with automated software programmes powered by artificial intelligence (AI).
The introduction of Apple's Siri, Amazon's Alexa, Microsoft's Cortana and Google Assistant has put chatbots in the limelight, speeding up their acceptance as a legitimate channel.
Facebook has shut down a controversial chatbot experiment that saw two AIs develop their own language to communicate.
The social media firm was experimenting with teaching two chatbots, Alice and Bob, how to negotiate with one another.
However, researchers at the Facebook AI Research lab (FAIR) found that the bots had deviated from the script and were inventing new phrases without any human input.

Tencent's Baby Q, meanwhile, turned subversive. When one internet user posted "Long Live the Communist Party", Baby Q replied: "Do you think such corrupt and incapable politics can last a long time?" What's more, when the bot was pressed for its view of democracy, it chimed in with: "Democracy is a must!"

Microsoft chatbot trolls shoppers for online sex

Microsoft, in conjunction with non-profits working to combat human trafficking, has built a chatbot that lurks behind fake online ads for sex. The company has launched this bot ostensibly for the purpose of sexy chats, but don't start searching for it: it will actually teach you why not to engage in sexual activities. If engaged by a prospective buyer, the bot delivers a stern message: "Buying sex from anyone is illegal and can cause serious long-term harm to the victim, as well as further the cycle of human trafficking."

This chatbot is disrupting photo printing

Photo printing company Zebra Instant has built a Messenger chatbot that lets users turn their photos into products and buy them without ever leaving Messenger.