DeepSeek Scaremongering
06 Feb 2025

Recently I stumbled upon a post on LinkedIn about the Chinese-developed Large Language Model (LLM) DeepSeek that I wanted to address. Specifically, the post was scaremongering about security concerns around the wide adoption and use of the DeepSeek model.
The post looks as follows:
As you can see, it’s quite an alarming and inflammatory post, playing up the threat that DeepSeek poses to businesses everywhere.
When you dig into it, it’s not actually saying much. In fact, if I ask ChatGPT-4o to:
Reduce the following post down to the points it actually makes, without repetition or commenting on the actual points being made:
It responds with a very short summary of the actual contents:
- DeepSeek is suspiciously cheap, which suggests hidden motives behind its pricing.
- It may contain backdoors, posing a security threat as it integrates into more systems.
- Rapid adoption of DeepSeek in critical sectors could lead to global security risks.
- The model can be seen as a Trojan horse, offering low cost and high capability while potentially compromising systems.
- The narrative of DeepSeek as a low-cost project is misleading, suggesting strategic agendas.
- Organizations should audit systems for DeepSeek’s presence, question its pricing, and limit its adoption.
So, on to my response to this post.
Who said it?
First off, let’s consider who posted it so we can properly attribute it and check for any biases.
The post was written by the Chief Technology Officer of EmployPlan, Roch Mamenas.
EmployPlan appears to be a small business that provides a “Resource Management Platform” for getting people with the right skills working on the right projects and plugging skill gaps with training. I’ve no idea how successful it is at this, but given Mr Mamenas has run it as CTO and founder for 5 years, I expect it does alright. I do take a little offence at calling people “resources”, but that’s pretty standard and less silly than saying “bums in seats”.
With their business being connecting people with businesses, there may be some biases regarding overpowered AI taking jobs away from the people resources EmployPlan wants to connect you with. After all, why hire people when you can get a machine to do it? I am being facetious here, as I am a strong believer that LLMs can’t replace actual skilled developers, but it could be a concern of theirs, especially if LLMs replace a lot of Junior Developers (again, something I am dubious of).
There may also be concerns around businesses (even one-man shops) using cheaper LLMs themselves to screen candidates rather than using a product like EmployPlan.
From the banner on his LinkedIn profile, it seems that EmployPlan uses some kind of “AI-powered” technology in their product. What that means is anyone’s guess; AI is such a buzzword these days. It could mean something as simple as “a bunch of if statements” or full-blown machine learning models. I’ll give the benefit of the doubt and assume actual AI is involved; in fact, I’d assume more traditional AI is used alongside the current buzzword applications.
The mention of being vaguely “AI-powered” is a little red flag to me that he may have some vested interests in the AI space. Then again, it could equally mean that, being invested in the space, he has some righteous concerns around DeepSeek.
From his job history, Mamenas has worked in IT for quite some time, going from the service desk to bouncing between Software Engineer and Analyst roles for about 6 years, before becoming a Chief Architect for two and a half years and then finally focusing on startups (including EmployPlan) for 7 years. The point is, at a glance he doesn’t appear to be a poseur with a cursory understanding of technology.
The post could also be deliberately inflammatory to provoke discussion, raise his profile, and drive some traffic towards his product. I think the younger generations call this “clout chasing”.
Addressing the points
With the potential biases and motivations of the author understood, let’s dig into the content of the actual post.
First off, there is a lack of evidence for any of the points Mamenas puts forward.
So far DeepSeek doesn’t appear to have any backdoors in its code; given that it is open-source, we can verify this and fix any issues as they appear.
Obviously, we cannot guarantee that their live service application is secure. But the same can be said for any of the LLM-as-a-service products that exist, and any SaaS product in general. This is actually one of the ongoing issues related to the TikTok ban in the US, a proposed solution being to sell the American arm of the business to a US-based company to avoid foreign powers getting their hands on sensitive data (of course, that wouldn’t prevent US spying). There are other solutions/workarounds to such risks, including enforcing no data egress from specific regions or data centres.
I agree with the point that “Wide Adoption = Global Risk”, but this can be said of any software-as-a-service platform. Putting all your eggs in one basket is inherently risky. I also like the “80%” stat that comes seemingly out of nowhere. As we know, 73.6% of all statistics are made up.
The fact DeepSeek have open-sourced their model means that, if you are suspicious of them, you can run it yourself on your own hardware with all the accompanying security in place, including air-gapping it if desired. This eliminates the risk of “backdoors” or other leaks to the Chinese state, though it can obviously be cost-prohibitive for smaller businesses.
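To make that concrete, here’s a minimal sketch of self-hosting using Hugging Face’s transformers library. I’m assuming the transformers, torch, and accelerate packages are installed and that your hardware can hold one of the smaller distilled checkpoints DeepSeek have published; the model name below is illustrative, so swap in whatever fits your machine:

```python
# Minimal sketch: run a distilled DeepSeek checkpoint entirely on local
# hardware, so no prompt or response ever leaves your network.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",  # illustrative checkpoint
    device_map="auto",  # place weights on whatever GPU/CPU is available
)

result = generator(
    "Summarise the security trade-offs of self-hosting an LLM.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```

Pair that with standard network controls (or a full air gap) and the “backdoor phones home” scenario is off the table; the weights are just a file on disk.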
There is a risk when it comes to built-in biases within the model. Again, these exist in all other commercial models. For instance, DeepSeek might not like speaking ill of the Chinese Government, or could have omitted training data related to June 4th 1989; in fact, a commenter on the post points out an example of DeepSeek’s biased answers. Similarly, models like OpenAI’s have developed their own biases based on training material, whether by design or by accident.
The point labeled “Trojan Horse Effect” is an interesting one, but it applies equally to all software adoption. Microsoft was very good at offering its products cheaply to schools and universities, which ultimately led to their wide adoption through familiarity; now they’re one of the largest players in the desktop space and have effectively captured a large amount of the market. I do think that when it comes to LLMs we’re still in the bubble phase: OpenAI have the first-mover advantage, but I expect the most widely adopted LLM service will be one that hasn’t been created yet.
The post keeps circling back to DeepSeek’s price point being a red flag for concern, but is it really? As the landscape advances, LLMs are going to evolve to be better than their predecessors along certain dimensions. As an example, I noticed a big difference between the OpenAI models as they advanced, especially when it came to understanding source code, and other models are supposed to be better still. One of the dimensions it is beneficial for all LLMs to optimise for is obviously running cost, especially for service-based businesses.
The assumption that low cost implies malicious intent isn’t without merit, depending on what you consider “malicious”. As I mentioned previously, Microsoft used a similar tactic to penetrate markets and drive widespread adoption. Large businesses operate at economies of scale that allow them to undercut competitors. Start-ups with a lot of initial funding can equally undercut established businesses to bootstrap themselves to success, disrupt an industry with novel approaches, or downright ignore established legislation, as ride-sharing apps did to the taxi industry.
We also mustn’t forget the effect of “standing on the shoulders of giants”: advancements happen thanks to the work done previously. Normally this is a slow, incremental process, but occasionally we see jumps in human understanding and knowledge.
Sometimes it is wiser to use the more expensive option, or a homegrown alternative to a foreign business. The debacles around shortages of personal protective equipment and drugs during the pandemic taught us that global supply chains are susceptible to disruption, and disruption can be even easier in the digital space.
Out of curiosity, I asked DeepSeek’s uncle, ChatGPT-4o, to give me some potential criticisms of the post:
- Lack of Evidence: The post raises concerns about DeepSeek’s security implications, but it does not provide concrete evidence or examples of security breaches or backdoors attributed to DeepSeek.
- Assumptive Reasoning: The post assumes that low cost necessarily implies malicious intent, which may not always be the case. There could be other reasons for the low pricing, such as market penetration strategies.
- Fearmongering: By using terms like “Trojan horse” and suggesting large-scale compromise without backing data, the post may be seen as fearmongering, potentially causing unwarranted panic among users.
- Generalization: The claim about global risk from widespread adoption does not consider differing degrees of security measures across industries or specific usage contexts.
- Lack of Alternatives or Solutions: While the post suggests auditing and questioning DeepSeek’s pricing, it lacks a deeper exploration of alternatives for users who might find value in DeepSeek’s capabilities.
- Xenophobia: Highlighting DeepSeek’s Chinese origins as inherently threatening might come across as xenophobic, attributing risk based purely on national origin rather than objective analysis of the technology.
- Ignorance of Open Source or Unconventional Funding: The post does not consider the possibilities that DeepSeek could benefit from open source contributions or alternative funding models that allow for lower pricing.
Ensuring claims are substantiated, examining motivations thoroughly, and providing balanced viewpoints are critical for a fair assessment.
Admittedly, it gives a bit of a milquetoast answer, which is not uncommon with LLMs.
ChatGPT raised an excellent point about the importance of considering usage contexts. Just because something becomes widespread doesn’t mean it enables total domination; it’s all context-dependent. Obviously, if you plug an LLM into your infrastructure and give it free rein to access all your data and systems, you’re asking for trouble. But with controls in place, it’s a much smaller issue. If you never supply it with sensitive information, then even a leak of everything you sent it is a non-issue.
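As a crude sketch of what “controls in place” can mean at the simplest level, here’s a redaction pass that scrubs obviously sensitive patterns before a prompt ever leaves your network. The regexes and the send_to_llm function are hypothetical placeholders; a real deployment would use a proper data-loss-prevention layer:

```python
import re

# Hypothetical illustration: scrub sensitive-looking substrings before a
# prompt is sent to a hosted model. The patterns are deliberately crude.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),      # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),  # card-like digit runs
]

def scrub(prompt: str) -> str:
    """Replace sensitive-looking substrings with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    """Placeholder for whichever hosted model you actually call."""
    print(f"Prompt leaving the network: {prompt!r}")
    return "<model response>"

send_to_llm(scrub("Email jane.doe@example.com about card 4111 1111 1111 1111"))
```

With something like this in front of the service, what the provider can leak is bounded by what you let through.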
With all that said, I agree with the conclusion about auditing your systems, but I’d make a modification to it:
Audit your systems: risk-assess what is embedded in your critical infrastructure.
Be it AWS, OpenAI, DeepSeek, Oracle, SQL Server, or any other solution, you need to know where your risk points are, where you’ve committed to a one-way-door decision (one not easily reversed), and what mitigations you have in place (if any). There’s a reason we often use interfaces or APIs in software development: to insulate ourselves from change.
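For example, a thin interface of your own between your code and any given model vendor keeps that door two-way. The class names below are illustrative, not real client libraries:

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """The only LLM surface the rest of the codebase is allowed to see."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class HostedOpenAIChat(ChatModel):
    """Adapter around a hosted vendor API (call omitted in this sketch)."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the vendor's API here")

class LocalDeepSeekChat(ChatModel):
    """Adapter around a self-hosted model (call omitted in this sketch)."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the self-hosted model here")

# Swapping vendors is now a one-line change at the composition root,
# not a find-and-replace across the whole codebase.
model: ChatModel = LocalDeepSeekChat()
```

If DeepSeek (or OpenAI, or anyone else) ever becomes a risk you can’t stomach, the blast radius of replacing them is a single adapter.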
After evaluating all these points you might find that the risks are minimal or that they outweigh the possible gains.