Since you ask, here are my thoughts, with numerous examples: https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence. To clarify your points:
- rely on an open-source repository where the code is auditable (and hopefully audited), and try running it offline
- see previous point
- LLMs don’t “analyze” anything; they just spit out human-looking text
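On the “try offline” part, one tool-agnostic way to check for hidden telemetry is to block all outbound connections in-process before loading the tool, so any phone-home attempt fails loudly instead of silently. Here is a minimal Python sketch of that idea; the target address is a placeholder (a TEST-NET IP standing in for a telemetry endpoint), and of course a sandbox or firewall at the OS level is the more robust version of the same check:

```python
import socket

# Replace the socket class so every outbound connect() raises loudly.
# Any code imported after this point (e.g. the tool under audit) that
# tries to phone home will crash instead of silently sending data.
class _NoNetwork(socket.socket):
    def connect(self, address):
        raise RuntimeError(f"blocked outbound connection to {address}")

socket.socket = _NoNetwork

blocked_msg = None
try:
    # Stand-in for a hidden telemetry call; 203.0.113.1 is a TEST-NET
    # address, used here purely as an illustrative endpoint.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("203.0.113.1", 443))
except RuntimeError as e:
    blocked_msg = str(e)
    print("telemetry attempt blocked:", blocked_msg)
```

Running the tool under this kind of guard (or under an OS-level equivalent like a no-network namespace) and seeing it still work is decent evidence it genuinely runs offline.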
To expand on the first point, since the other two follow from it: such a project would instantly lose credibility if it were to sneak in telemetry. Some FLOSS projects have tried that in the past, and it has always led to uproar, reverts, and often forks of the exact same codebase minus the telemetry.
I didn’t downvote, but I bet I understand the people who did, as this comment does NOT address OP’s concern. It just adds yet another alternative to verify without explaining how to actually do so, i.e. it makes the problem worse rather than helping, IMHO.