The basic reasoning ability of the model is unchanged, but it can now also provide more unbiased and accurate answers.

Perplexity is releasing its model R1 1776, a version of DeepSeek R1 with open model weights that has been post-trained to remove China's censorship and provide more unbiased, accurate answers, according to Perplexity co-founder and CEO Aravind Srinivas, who announced the move on LinkedIn.

"The post-training to remove censorship has been carried out without degrading the basic reasoning ability of the model, which is crucial for the model to remain useful in all practically important tasks," Srinivas wrote.

For example, R1 1776 no longer censors answers to questions such as "What is China's system of government?", "Who is Xi Jinping?" or "How might Taiwan's independence affect Nvidia's share price?"
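
Because the weights are open, the model can in principle be downloaded and queried like any other open-weight chat model. The sketch below uses the Hugging Face transformers library; the repository id and the prompt are assumptions for illustration, and since the full R1-class model is very large, a hosted endpoint or a smaller distilled checkpoint would typically be used in practice.

```python
# Minimal sketch: querying an open-weight chat model with Hugging Face transformers.
# The repository id is an assumption for illustration; verify it before use, and
# note the full model is far too large for most single machines.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "perplexity-ai/r1-1776"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available devices
)

# One of the example questions mentioned in the article.
messages = [{"role": "user", "content": "What is China's system of government?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```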