By Leo Almazora, in InvestmentNews, featuring Shane Cummings, CFP®, AIF®, Wealth Advisor & Director of Technology/Cybersecurity
To recap: Anthropic, which has sought to distinguish itself from competitors with a reputation for safety, accidentally exposed the underlying instructions it uses to steer its Claude Code app last Tuesday. By Wednesday, representatives for the company had issued a copyright takedown request seeking to remove thousands of copies and adaptations of the raw instructions that had proliferated on GitHub, a popular code-sharing platform among developers. That request was later relaxed to cover just 96 copies and adaptations.
A spokesperson for the company downplayed the impact of the incident, telling the Wall Street Journal that its mistaken reveal of “some internal source code” didn’t expose any customer information or data.
“This was a release packaging issue caused by human error, not a security breach,” the representative said, adding that the company was “rolling out measures to prevent this from happening again.”
The incident raises fresh questions about the credibility of Anthropic, which is set for a potential public offering later this year. The company also extended its reach into the wealth space this past February by rolling out new dedicated plug-ins while announcing strategic partnerships with LPL and Orion to deploy the technology.
‘Growing faster than responsibility dictates’
Eric Franklin, managing principal, co-founder, and advisor at Prospero Wealth, sees the incident as a teaching moment for RIA firms and advisors.
“The fact that Anthropic runs one of the most ‘agentic’ of the AI frameworks, in Claude Code, should give all advisors and firms a wake-up call about the responsibility we have to our clients,” Franklin told InvestmentNews.
Working with an AI provider, he stresses, demands more than a signature and a subscription fee. Firms need to understand the sensitivity level of their data and build processes that sanitize personally identifiable information before it ever reaches an AI system.
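What that sanitization step looks like in practice will vary by firm, but the simplest version is a redaction pass that strips obvious identifiers before any text leaves the firm's systems. The sketch below is illustrative only; the `scrub_pii` helper and the patterns it uses are assumptions for demonstration, not any vendor's tooling:

```python
import re

# Illustrative patterns only; a production system would pair these with a
# vetted PII-detection library and firm-specific rules (account numbers,
# client names, portfolio identifiers).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace known identifier patterns with labeled placeholders
    before any text is sent to an external AI system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# The scrubbed string, never the original, is what leaves the firm.
note = "Client SSN 123-45-6789, reachable at jdoe@example.com, asked about rebalancing."
print(scrub_pii(note))
# -> Client SSN [REDACTED SSN], reachable at [REDACTED EMAIL], asked about rebalancing.
```

Even a pass like this misses names and context-dependent identifiers, which is the point Franklin is making: the process has to be designed deliberately, not assumed.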
Shane Cummings, wealth advisor and director of technology and cybersecurity at Halbert Hargrove, echoes the point at the contract level. He says firms using Anthropic’s products should immediately review the data protections baked into their agreements, paying particular attention to whether their subscription includes a zero data retention rider.
“If Claude stores any identifiable client information, that could be a liability for an advisory firm in the event of a future breach,” he said.
Both advisors point to the same underlying dynamic driving these risks. “AI companies are growing faster than responsibility dictates,” says Franklin, who sees code leaks and security incidents as virtually inevitable as long as the AI gold rush continues.
Cummings agrees, arguing that major hyperscalers are currently competing on speed rather than security.
“I think there is a definite fear in the industry that anyone not keeping pace with the pack on AI adoption is falling behind,” Cummings said, “but that does not mean we should lower our guard and play fast and loose with data security.”
Wealth tech experts’ advice: Ask providers tough questions
For advisors not using Claude Code directly, the immediate operational risk from Anthropic’s code leak may be limited, but the lesson cuts much deeper than a single product’s debug file.
“It is a cautionary tale,” says John O’Connell, founder and CEO of Oasis Group. “If this were to happen with Claude – which one would argue has a lot of safeguards – then this could happen to any vendor using artificial intelligence in their product that is not taking some really specific precautions.”
Drawing a distinction between the platforms firms build on and the dependency layer underneath them – which could include third-party AI tools, like Claude Code, that developers weave into their own products – O’Connell says many advisors tend to underestimate the vendor-layer risk.
“You cannot just turn it on and off,” he warns, explaining that once a debug file or code bundle is exposed to a public registry, the exposure is effectively permanent for anyone who accessed it during that window. In other words, if a firm’s technology vendor has announced an AI partnership, advisors shouldn’t be shy to ask questions about source-code hygiene, monitoring practices, and what specific precautions are being taken to prevent the same nightmare scenario.
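On the vendor side, the kind of precaution O’Connell is describing can start with something as simple as a release gate that fails the build when internal files are about to ship. A minimal sketch, assuming a Python-based packaging pipeline and hypothetical filename patterns:

```python
import sys
from pathlib import Path

# Illustrative deny-list only; a real release gate would be tuned to the
# vendor's repo layout (prompt files, debug dumps, credentials, configs).
FORBIDDEN_PATTERNS = ["*.debug", "*system_prompt*", "*.env", "*internal*"]

def check_release_dir(release_dir: str) -> list[Path]:
    """Return any files in the release bundle that match the deny-list."""
    root = Path(release_dir)
    if not root.is_dir():
        sys.exit(f"No such release directory: {release_dir}")
    return [p for pattern in FORBIDDEN_PATTERNS for p in root.rglob(pattern)]

if __name__ == "__main__":
    offenders = check_release_dir(sys.argv[1] if len(sys.argv) > 1 else "dist")
    if offenders:
        # Fail the pipeline before anything reaches a public registry;
        # once published, the exposure is permanent for anyone who pulled it.
        for path in offenders:
            print(f"BLOCKED: {path}", file=sys.stderr)
        sys.exit(1)
    print("Release bundle clean.")
```

The value of a gate like this is that it runs before publication; as O’Connell notes, there is no reliable way to claw an artifact back afterward.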
While no customer information appears to have been compromised in the Anthropic incident, Mike Wilson, co-founder and CEO of Hamachi.ai – which bills itself as a regulatory-first, AI-powered wealth intelligence platform for investment advisors – argued it should prompt a broader conversation about data security.
He cited an IBM study, published last December, which found that 39% of financial services firms admitted to sending confidential information to AI tools.
“That means probably 100% of firms actually did it,” Wilson says. The same study found AI implicated in 20% of all data breaches – a figure he expects to grow.
“AI is going to be revolutionary for wealth management, 100%,” he says. “But this industry being highly regulated and [given] how sensitive the information is, we need to lead with these guardrails.”
Apart from prevention, O’Connell argues the most urgent question is about incident response. While regulators have pushed firms toward notification windows as short as 48 hours for data breaches, he says most vendor license agreements explicitly disclaim accountability for breaches, leaving the RIA to “hold the compliance bag.”
“If I’m carrying all the risk [as an advisor], I 100% need to understand what the vendor is doing to stay on top of this, and what their timeframe is to notify me,” he says.
