I always chuckle when I see companies trying to “ban” new technology. On one hand, I understand the impulse (it’s impossible to ensure proper data security controls), but on the other, a hundred new AI-based applications pop up every day. Ban ChatGPT and people will just use some other tool, probably one with even worse data security safeguards.
In my opinion, the only real way out of this is for companies to offer their own security-approved solution. This might take the form of an internally hosted model and chat interface, or one pointing to Microsoft Azure’s OpenAI APIs (Azure having more enterprise-friendly data security terms).
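To make that concrete: from the employee’s point of view the chat UI stays the same, and only the backend changes. Here’s a minimal sketch of the Azure-pointing variant, assuming the openai Python SDK (v1+); the endpoint, environment variable, and deployment name are placeholders I made up, not anything Azure prescribes:

    import os
    from openai import AzureOpenAI

    # Point the client at your company's own Azure OpenAI resource
    # instead of api.openai.com.
    client = AzureOpenAI(
        azure_endpoint="https://your-company.openai.azure.com",  # placeholder resource
        api_key=os.environ["AZURE_OPENAI_KEY"],  # pull from a secrets manager, not source
        api_version="2024-02-01",
    )

    resp = client.chat.completions.create(
        model="gpt-4-internal",  # the Azure *deployment* name you created, not a raw model name
        messages=[{"role": "user", "content": "Summarize this internal design doc..."}],
    )
    print(resp.choices[0].message.content)

The point being that the traffic terminates in your own Azure tenant under whatever terms you negotiated, rather than going to OpenAI directly.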
This article says Apple is working on their own LLM, and presumably they’re offering that for employees to test, but many other companies are simply closing their eyes and trying to pretend it doesn’t exist.
I don't think you should be advertising your services when you have such a tenuous grasp of the situation.
It has nothing to do with being fearful of new technology or because Apple doesn't have their own LLM to offer their employees. It's because ChatGPT allows sensitive data to be exfiltrated to a third party that in turn may make it available to the general public.
It's purely about information security and risk management. Having actually worked at Apple, I can tell you they take this stuff extremely seriously.
They should take it seriously. If a lot of your product is mostly copy-pastable, why on earth would you feed even pieces of it to a third party? There are so many legal and logistical issues with it that it isn't even funny. People using ChatGPT now are mostly doing so because they believe it makes them efficient; that's kind of counterproductive when your competitor can pay a penny to get your IP after months of your efficient work. Or better yet, what's to stop OpenAI from eating your lunch? Nothing. Literally nothing. Even worse, at what point can OpenAI say they own your product because their model helped develop it? In the land of business it's not about who is right, it's about who has the most money to pay to create the laws.
The OP's comment mentions the concern around data security multiple times. What gave you the impression they had a tenuous grasp of the situation? It feels like an unwarranted attack, and like you only read what you wanted from the comment.
I think we’re saying the same thing: companies are concerned about the data and security risk. I still don’t think they’re going to be able to universally ban it, and I think doing so without an alternative in place will drive employees to even less secure third-party chat apps that probably use the same API behind the scenes anyway.
And I removed the last sentence from my comment, which didn’t mention a service, only that I was interested in chatting with folks in similar positions, precisely because this is a new space and many companies are scrambling to address it.
If you're going through the Apple network or using corporate equipment, then of course they can universally ban it. They monitor all outbound traffic and audit what you're doing on your computer.
Do you actually think companies just give you root access and unrestricted internet and operate on a "we trust you" model?
Of course not. My point is not about _how_ companies ban it, or how their IT security policies work, but rather that I think they need to offer an approved, internal alternative. Employees are clearly clamoring to use this type of technology and increasingly see it as necessary for their work. Trying to ban something without an alternative in place (or with draconian "we'll fire you if you try to use it" policies) is where you start to run into trouble.
I think you're missing the way this type of ban works. When you tell employees not to use a banned service, it becomes an employee discipline issue. If they violate it, it's likely a problem for the company, but it becomes a serious problem for that employee.
> Ban ChatGPT and people will just use some other tool
Depends on the policy. The actual rule may be "no external AI-assisted tools or you're fired" rather than anything ChatGPT-specific. And I fully expect Apple will be able to tell from their network monitoring if you broke the rule.
True, but for now I think it’s good to be slow when you’re a big corp. Especially when it comes to code, it’s probably wise to wait out the early legal battles first.