
Perplexity Lets You Try DeepSeek R1 Without the Security Risk, but It's Still Censored
Chinese startup DeepSeek AI and its open-source language models took over the news cycle this week. Beyond rivaling models like Anthropic's Claude and OpenAI's o1, the models have raised several concerns about data privacy, security, and Chinese-government-enforced censorship built into their training.
AI search platform Perplexity and AI assistant You.com have found a way around that, albeit with some limitations.
Also: I tested DeepSeek's R1 and V3 coding abilities – and we're not all doomed (yet)
On Monday, Perplexity announced on X that it now hosts DeepSeek R1. The free plan gives users three Pro-level queries per day, which you can use with R1, but you'll need the $20-per-month Pro plan to access it more than that.
DeepSeek R1 is now available on Perplexity to support deep web research. There's a new Pro Search reasoning mode selector, in addition to OpenAI o1, with transparent chain of thought into the model's reasoning. We're increasing the number of daily uses for both free and paid as we include more … pic.twitter.com/KIJWpPPJVN
In another post, the company confirmed that it hosts DeepSeek "in US/EU data centers – your data never leaves Western servers," assuring users that their data would be safe when using the open-source models on Perplexity.
"None of your data goes to China," Perplexity CEO Aravind Srinivas reiterated in a LinkedIn post.
Also: Apple researchers reveal the secret sauce behind DeepSeek AI
DeepSeek's AI assistant, powered by both its V3 and R1 models, is available via browser or app – but both require communication with the company's China-based servers, which creates a security risk. Users who download R1 and run it locally on their own devices avoid that issue, but still encounter censorship of certain topics determined by the Chinese government, since it's built in by default.
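For readers curious about that local route, here is a minimal sketch using Hugging Face's transformers library. It assumes one of the distilled R1 checkpoints DeepSeek published (the repo ID below is an assumption; the full R1 model is far too large for consumer hardware), and it illustrates the point above: once the weights are downloaded, inference runs entirely on your machine, but any refusals come from the model itself rather than a remote server.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo ID for one of DeepSeek's distilled R1 checkpoints; swap in
# whichever variant fits your hardware.
MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto").to(device)

# Build a chat-style prompt and generate a reply entirely on this machine;
# after the one-time weight download, nothing is sent to remote servers.
messages = [{"role": "user", "content": "Who is the president of Taiwan?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Run this way, any censored responses reflect the model's training rather than server-side filtering, which is the behavior described below.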
As part of offering R1, Perplexity claims it removed at least some of the censorship built into the model. Srinivas posted a screenshot on X of query results that acknowledge the president of Taiwan.
However, when I asked R1 about Tiananmen Square using Perplexity, the model refused to answer.
When I asked R1 whether it is trained not to answer certain questions determined by the Chinese government, it responded that it's designed to "focus on accurate information" and "avoid political commentary," and that its training "emphasizes neutrality in international affairs" and "cultural sensitivity."
"We have removed the censorship weights on the model, so it should not act in this manner," said a Perplexity representative in response to ZDNET's request for comment, adding that they were looking into the issue.
Also: What to know about DeepSeek AI, from cost claims to data privacy
You.com offers both V3 and R1, likewise only through its Pro tier, which is $15 per month (discounted from the usual $20) and includes no free queries. In addition to access to all the models You.com offers, the Pro plan features file uploads of up to 25MB per query, a 64K maximum context window, and access to research and custom agents.
Bryan McCann, You.com cofounder and CTO, explained in an email to ZDNET that users can access R1 and V3 through the platform in three ways, all of which use "an unmodified, open source version of the DeepSeek models hosted entirely within the United States to ensure user privacy."
"The first, default way is to use these models within the context of our proprietary trust layer. This gives the models access to public web sources, a bias toward citing those sources, and an inclination to respect those sources while generating responses," McCann continued. "The second way is for users to turn off access to public web sources within their source controls or by using the models as part of Custom Agents. This option allows users to explore the models' unique capabilities and behavior when not grounded in the public web. The third way is for users to test the limits of these models as part of a Custom Agent by adding their own instructions, files, and sources."
Also: The best open-source AI models: All your free options explained
McCann noted that You.com compared the DeepSeek models' responses based on whether they had access to web sources. "We observed that the models' responses varied on several political topics, sometimes refusing to answer on certain issues when public web sources were not included," he explained.