China’s ChatGPT Competitor Bans Users Asking AI About Xi Jinping and Winnie the Pooh
The tech industry is buzzing with the news of China’s new AI-powered chatbot, Ernie Bot. Developed by Beijing-based tech firm Baidu and rolled out in March, Ernie Bot has been touted as a better alternative to OpenAI’s ChatGPT. However, when CNBC reporter Eunice Yoon tested the chatbot on Friday during a segment on CNBC’s “Squawk Box”, it quickly became apparent that it has some serious limitations.
Ernie Bot’s Limitations
When asked about President Xi Jinping or COVID-19, Ernie Bot either refused to answer or spewed misinformation. Yoon then asked the chatbot about the relationship between Xi and Winnie the Pooh, only for it to give no response and for her access to Ernie Bot to be disabled. This comes as no surprise: mentions of the cartoon bear have been banned from Chinese social media since 2017, after memes likened Xi to the portly Pooh strolling alongside a taller companion cast as Tigger.
Ernie Bot also declined to say why China ended its extreme “zero-COVID” policies or whether Xi will “rule China for life”. When asked to compare itself to OpenAI’s ChatGPT, Ernie said it is “more suitable for specific tasks such as question answering and dialogue generation, while ChatGPT is more general in its ability to understand and generate natural language.”
Regulation of AI-Powered Bots
These events have raised questions about how to regulate AI-powered bots. Against that backdrop, OpenAI CEO Sam Altman, whose Microsoft-backed company makes ChatGPT, appeared before Congress on Tuesday and warned of the “significant harm to the world” these platforms could cause if their supervision goes awry. Meanwhile, several major companies including JPMorgan Chase, Verizon and Amazon have banned their workers from using such chatbots. Apple followed suit this week and is reportedly developing its own language model, according to documents reviewed by The Wall Street Journal.
The development of AI technology brings with it both great potential and serious risks. As Ernie Bot’s refusal to answer sensitive questions about President Xi Jinping or COVID-19 shows, better regulation of these powerful tools is needed if they are to be used responsibly and ethically. OpenAI’s recent addition of a feature that lets users turn off their chat history, so conversations won’t be used to train its AI models, looks like a step in the right direction.