Concerns about AI security
Hello! Long-time Firefox user here; I grew up with a father working in IT security. He has mentioned this a few times before, but now we've both really noticed the push towards using generative AI in IT roles where it's unsuitable, and honestly quite a risk to security and privacy. It's also a waste of time having to fact-check everything properly, because I cannot even be sure that the first result of a Google search for "how far away is the moon" will be correct.

We're quite concerned that Firefox is following suit, especially with the new CEO evidently not understanding some of the fundamentals of why this browser has such a dedicated user base. Just wondering: is saving money by not hiring competent, trained professionals who can provide the human touch needed to run a browser truly worth eroding Firefox's reputation as the genuinely good alternative to Chrome? How long before the AI stops being optional? How much of the code is going to end up written by a large language model and passed off as safe?

If this push continues, what alternatives does anyone in the community suggest? Thank you in advance for any responses, from both me and my father :-)
All Replies (2)
LibreWolf is what I keep seeing suggested.
Also, it seems every person who knows a little about how these things really work keeps saying "do not use AI for this because it's not safe," and everything I'm reading about the security of these integrations suggests there is none.
I don't think there is a push to have LLMs write Firefox's code. The latest statement is here:
https://blog.mozilla.org/en/mozilla/leadership/mozillas-next-chapter-anthony-enzor-demeo-new-ceo/
To help shape Firefox going forward, I suggest posting your concerns and suggestions in the thread over on Mozilla's product suggestion site:
That has a better chance of reaching decision makers than a support request here.