I came from AI (heuristic search) in research and from a highly sensitive business area with strict regulations (the financial industry, in a sense a Fintech predecessor). In developing business software, Smalltalk played an important role for me. That period ended about 20 years ago; more recently my work shifted to consulting for governmental services.
Given my personal bias, as long as there is no convincing concept for preventing unwanted transfer of information, any remote AI support is a potential risk. In that respect, the absence of AI support was until now a benefit - to be precise, even access to online help or to any public repository was regularly denied.
Beyond that, LLMs are based on probabilities (of matching solutions to a problem). Their answers are drawn from a distribution of gathered solutions (coming from training data or previous questions) without any established (human) quality control.
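To illustrate what "answers from a distribution" means, here is a minimal toy sketch in Python (the tokens and their probabilities are invented for illustration, not taken from any real model):

import random

# Hypothetical learned probabilities for the next token after
# a prompt like "the bug is". Nothing here checks correctness;
# the weights only reflect what was frequent in the training data.
next_token_probs = {
    "fixed": 0.55,
    "reproduced": 0.25,
    "harmless": 0.15,
    "critical": 0.05,
}

def sample_next_token(probs):
    # Draw one token according to its probability mass.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Two runs with the same prompt can legitimately give different answers.
print(sample_next_token(next_token_probs))
print(sample_next_token(next_token_probs))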
If coding is seen as similar to writing text, then assistance along the lines of syntax highlighting or spelling correction is fine.
However, if coding has to provide solutions to real-world problems (starting at the design, architecture, and higher levels), any inclusion of remote AI support requires establishing professional quality control and accepting the transfer of information to the public - that is, a loss of data confidentiality.
Everybody has to decide where the boundary for this implied information transfer lies. Personally, given my limited experience and my encounters with AI-generated code, I remain sceptical of any public-cloud-based AI integration in software building tools.