unpopular thought perhaps, but with this many companies/teams mandating MFA (especially to technical people, who should already know how to create secure passwords, not use them on more than one site, not spread them around, etc):
The pressure of all of these MFA inputs, especially for products that expire them even on a trusted device/browser, is eventually going to push people into the arms of convenient "password managers".
This will effectively nullify the 'something you have' in MFA because it'll all be available on your one single device again.
Even worse, it'll present multiple high-value targets now, from the centralized server/sync side (e.g. LastPass) down to individual devices.
Put another way: if you're storing the passwords in the same place as the MFA secrets, then it's not actually MFA anymore.
It's not that PyPI is wrong to do this, it's that the weight of everyone mandating MFA will eventually either push people away or force them to work around draconian or onerous security requirements.
> if you're storing the passwords in the same place as the MFA secrets, then it's not actually MFA anymore.
I see this sentiment a lot and IMO it over-simplifies the security model pretty drastically. It's unlikely that a 1P vault is breached or leaked. For most people, the most likely threat vector for a random internet account is:
1. they re-use a password across sites
2. one site gets breached and that password is included in a hash dump (probably associated with their email)
3. once the hashes are broken, an attacker can try that password on another site
In this situation, MFA stored in their password manager is still MFA because the TOTP secret wasn't leaked with the password (or it was and then it doesn't matter where you store it on your end).
The only case in which storing MFA secrets with your passwords becomes an issue is if other people have access to your laptop (and password manager). Then you'd probably want passwords on the laptop and MFA on your phone (or something else kept on your person).
Nobody questions leaving a yubikey in a laptop (which is also full of passwords). Even if doing so means anyone with your laptop can use your yubikey, it narrows your attack vector from "anyone who got your password" to "anyone with physical access to your laptop", which is a great reduction in scope (for most people).
That's not the only attack vector that is mitigated by TOTP. If a hacker is somehow able to intercept your password and your TOTP for a short period of time (e.g. because you're in an unsecured network at Starbucks or something), they can do exactly nothing with that knowledge afterwards. They won't be able to perform a replay attack, and they won't be able to guess your next TOTP, or any that comes after. The "only" thing they can do is attack you right at the moment where you're using the insecure network (e.g. with some sort of MITM attack).
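For readers who haven't looked under the hood: a TOTP code is just an HMAC of a shared secret and the current 30-second window, which is why a captured code is useless moments after the window rolls over. A minimal sketch of RFC 6238 (real authenticators add clock-drift tolerance, rate limiting, etc.):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the counter changes every step, replaying an intercepted code fails as soon as the next window starts, and knowing one code tells you nothing about the next without the secret.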
An unsecured network doesn’t help an attacker gain your TOTP and password unless you’re using a website without HTTPS or that otherwise messes up by putting the credentials as query parameters.
The most an attacker might be able to view is the addresses of the sites you are connecting to.
> The most an attacker might be able to view is the addresses of the sites you are connecting to.
With TLS 1.2 or earlier the attacker can almost certainly discern the real DNS name of the site you're connecting to, in TLS 1.3 this is merely likely (and ECH might some day largely eliminate this risk) but not certain depending on how you connect.
In practice your client hates wasting bandwidth and so precise size measurements are also surprisingly effective. If six people who I'm snooping watch movies from the Fast & Furious franchise on a streaming service and one watches "The Imitation Game" I can tell them apart with more or less 100% reliability. If they all read Wikipedia, six looking at stuff about dinosaurs and one reading about the Senate Intelligence Committee report on CIA torture, I can tell again.
Clients (e.g. your web browser) could do more to hamper this, but they do almost nothing. For example, suppose I'm sending an encrypted HTTP request with some data in it, and it'll fit easily into 4 Ethernet packets. I could pad that last packet so it's always full, and have the decryption step remove that padding for free but clients don't bother, so a bad guy can measure how long my data is to within maybe 16 bytes.
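A toy illustration of the bucketing idea (the bucket size here is arbitrary, not anything a real TLS stack uses): round every payload up to a fixed boundary, and different-length requests become indistinguishable by size.

```python
def padded_len(n, bucket=1500):
    """Round a payload length up to the next multiple of `bucket` bytes."""
    return -(-n // bucket) * bucket

# A 3,200-byte and a 4,100-byte request both occupy 4,500 bytes after
# padding, so a passive observer can't separate them by size alone.
```

The tradeoff is wasted bandwidth, which is exactly why clients don't bother.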
Yes, you're correct. An unsecured network is not enough. A honeypot wifi that the attacker controls would work, though, because they could just perform a MITM attack and thus decrypt your TLS traffic.
They will be able to view the public certificate but will not be able to sign or decrypt anything because they do not have the corresponding private key, which is never sent over the wire.
HTTPS protects against MITM attacks.
When the owner of the domain originally obtained a certificate, they obtained a signed attestation from a trusted certificate authority that they were able to field requests for that domain. Those requests can come from anywhere and are not possible to MITM. This attestation pertains to a public key/private key pair.
> The only case in which storing MFA secrets with your passwords becomes an issue is if other people have access to your laptop (and password manager). Then you'd probably want passwords on the laptop and MFA on your phone (or something else kept on your person).
The problem is that a lot of the major package managers - at least nodejs npm/yarn, PHP composer, Apache Maven, Gradle - allow code execution of whatever packages were specified as part of the installation or build process.
That means there are an equally large number of very juicy targets for a takeover - just look at the most popular build plugins... and a few minutes are enough to deploy a KeePass and browser credential-store stealer to a ton of people. Getting local code execution isn't that difficult if you manage to spray your payload over a huge enough area.
Getting access to a target and their MFA device however is vastly more complicated.
They use 2FA for logging in to the website and make you create a token. After that you just save the token in your project and have a script to upload new versions.
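Concretely, the upload flow looks something like this with twine (the secret name is a placeholder; the token itself is minted on the website behind 2FA):

```shell
# PyPI expects the literal username "__token__"; the token value
# comes from a CI secret (name here is a placeholder).
export TWINE_USERNAME=__token__
export TWINE_PASSWORD="$PYPI_API_TOKEN"
twine upload dist/*
```

Scoping the token to a single project at least limits the blast radius if it leaks.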
When someone stores their MFA credential in a password manager, that only means that in the worst case, they are as insecure as someone with no MFA – right?
This doesn't seem like a big problem to me for two reasons:
1. if they're using a password manager, they are likely to be using a better password
2. it's much more likely that someone has intercepted a single static password for a single website than, say, your 1Password vault password AND username AND secret key
> When someone stores their MFA credential in a password manager, that only means that in the worst case, they are as insecure as someone with no MFA – right?
Even if this were right (which I don't think it is under the threat model described by the OP: they claim, rightly or wrongly, that the attack surface somehow increases when everybody adopts MFA), consider the protocol where you have to enter your password and cut off a lock of hair to authenticate. In the worst case, this is as insecure as someone who just enters their password without any hair cutting, but that doesn't mean that adopting it would not be a big problem, or that it would be a good thing overall (at the very least it would massively inconvenience bald people; there are parallels, it's not like MFA does not inconvenience certain people).
IMHO, there's a bit of confusion around terminology here. People aren't really mandating "MFA" unless they're requiring Yubikeys or something. But they are requiring TOTP, and adding TOTP to your login flow does mitigate certain kinds of attacks (though, of course not others).
So maybe we should just be more honest about what we call things. "True" MFA is too inconvenient for most people - even banking apps are not only happy to let you log in from exactly the same device that they will send verification codes to, they usually make more secure workflows more of a pain too. But we're still gaining some additional security by having one-time codes that can't as easily be stolen without having access to the device.
AFAIK they are still in common use in high security environments. I have one for work, but we are supposed to eventually migrate to phone-based token generators.
Devices like this are basically TOTP reified. I mean, they aren't literally the TOTP protocol (the technology is different), but it's a secret value (baked into the device) which is combined with a clock and a decent hash to produce predictable values over time. A kilt is a skirt, my sister isn't wearing a kilt but if you haven't seen any other skirts then "It's basically a kilt" is a pretty fair description.
RSA is embarrassing because they kept the fucking secret values. As a result it was strictly worse than just getting whatever cheap knock off you can purchase. I believe their rationale was if they keep these values when a customer inevitably goes "Oh, oops, we lost the values" instead of "Too bad, now you own useless bricks, buy more" you can "Help" them by providing the secret values again. But that ought to be the very stupidest idea from a security company if only there weren't so many other embarrassing stories.
In 2011 they suffered "an extremely sophisticated cyber attack" aka basic phishing, and bad guys are assumed to have stolen the complete database. So, that's not great.
In Germany, you can still get such devices (for people who don't own smartphones, presumably), but it seems a bit like banks are discouraging them and in any case, they make you pay for them (a one-time cost only, though). So most people will use the smartphone app.
The old boxes won't work anymore though. The reason is that with the EU directive PSD2, it became mandated that the codes be tied to the transaction in some way. So the newer devices need some way of reading the transaction info before generating the code, e.g. a QR reader.
While I can see the merit in discussing the issue of where to store the second factor, what I see much less often discussed is the story on the "tooling" side.
For example, on PyPI, to upload stuff, you need to generate a token. Now you have effectively a single factor, which requires as much care as a regular password.
> Put another way: if you're storing the passwords in the same place as the MFA secrets, then it's not actually MFA anymore.
Sure, but PyPI was also giving away yubikeys. Also, you're assuming that the compromise involves fully owning a device instead of the much more common case of phishing/password reuse.
It actually is hard to carry a Yubikey and, more importantly, use a Yubikey. Some employers don't allow personal USB-like devices in the building nor plugging them into company owned computers.
I say this as someone that has several that are used at home but needs to use personal TOTP codes at work.
Also outside of Hollywood movies there's not a great intersection between people who are great at this sort of hands on crime (e.g. robbery, pick pocketing) and the high level strategy needed to want a specific person's MFA token. Tom Cruise would play a character who does that (and rides a motorcycle, obviously) but in the real world it's not a thing.
Unlike car keys the tokens don't even know what they're for. You can walk around a car park with keys and match the car, these days it'll even remotely blip the lights - but if you have some random guy's Yubico Security Key, you don't even know if he uses Facebook, Google, PyPI, or what, let alone what the account's username/ email might be. Good luck.
They reset my password and then changed the e-mail.
The username remains the same and it is "arnon".
I tried re-registering now to check your claim but it says the username is under use and I can't restore the password for it since they changed the e-mail to one of theirs.
The last communication I got from PyPI was from Ee Durbin in 2022 saying:
> Given this, it appears that someone from <redacted> utilized the @<redacted>.com email address associated with the account to take it over and obtain access to the <redacted> libraries that the arnon User owned.
> We are discussing next steps internally.
> -Ee Durbin
> Director of Infrastructure
> Python Software Foundation
I've asked a couple of times for status updates as recently as this July and haven't heard back.
Can you change it though? When I first added my tokens I wrote their brand name to identify them, but then I realised I might buy more from that brand, so, I changed to writing a colour, reasoning that when I buy new ones even if they've got an identical colour I can add a blob of nail polish or something, so "The Red One" is clearly this one, not that one. I don't use PyPI but I found it convenient to go back and fix places where I'd written like "Yubico". It's not a big thing, but it's also hopefully not difficult to implement.
Too bad that PyPI (and pip) effectively killed PGP signatures under control of the developers (therefore truly end to end) even with the simple TOFU model, and without providing an alternative.
> Any chance of signed builds returning? It's bizarre to me that we would move _away_ from signed builds.
PyPI never supported "signed builds" in the first place. What it had was vestigial support for attaching PGP signatures to distributions; without a key or identity distribution mechanism, these signatures were virtually useless (and all public evidence indicates that they were, consequently, virtually unused).
Note that attached signatures alone don't prevent dishonesty on PyPI's part: without identity pinning, a dishonest PyPI could replace a correctly signed distribution Foo with a correctly signed (and easily exploitable) distribution Bar during a user's retrieval. Every signature needs to be bound to both the distribution's content and its distribution name by some stable discoverable identity.
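The content half of that binding can at least be approximated client-side today with pip's hash-checking mode, which stops an index from silently swapping bytes under a known name (the package name and hash below are illustrative, not real):

```
# requirements.txt -- with --require-hashes, pip rejects any artifact
# whose digest doesn't match what was pinned at review time
somepackage==1.2.3 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Installed with `pip install --require-hashes -r requirements.txt`. This still doesn't authenticate the publisher; it only guarantees that everyone resolves the same bytes, which is exactly the identity-binding gap being described.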
I hate these 2FA mandates. I don't use PyPI, but I do use GitHub, which has also announced a 2FA mandate.
I use my GitHub account to make bug reports, small pull requests, and silly personal projects. It is not that important. I want to sacrifice security for convenience on it, and that should be my choice.
I also do not agree with the argument this secures the supply chain because:
1. It ignores supply-chain attacks from people who already have repository access.
2. Most big companies (ie. Google) are probably already using 2FA.
3. And if people are automatically pulling code from random people/groups without checking it... maybe that's what actually needs to be banned.
Unfortunately even if you did not pull code from random groups, and instead curated your GitHub dependencies, you can still be caught by surprise when one person has a re-used password and no 2FA because “ugh it’s so inconvenient”.
Nothing will fully secure the supply chain, but this certainly reduces risk and given the impact software has in today’s world it’s important.
Most importantly, as a normal person, I'm more inclined to go through security hoops with internet banking and payments, and much less so for every single website that exists.
Hard disagree here, supply chain attacks are big business, it matters a lot more than a few thousand bucks in a savings account which can be easily reversed if stolen by crooks. PyPi isn't "every single website", it's full of modules powering a lot of the internet and other critical infrastructure.
I have a hardware key for the 2FA on my meagre open source libraries, it takes 10 seconds to pull it out of my pocket and use it. Why is that a bad thing if it's enforced? It seems more like you have a UX problem here, there's solid open source TOTP software that come with browser extensions and are one click to use. SMS only can be a pain but many companies are moving away from that, albeit slowly.
I also have a hardware key, I got it for free last year.
It doesn't take 10 seconds. It takes remembering to keep it with me when I travel.
Also with 2FA the risk of being permanently locked out of my account increases A LOT.
With a bank or similar I can show up to their office, show my id and reset all access. With websites there is NOBODY responding. I've tried taking over an abandoned project on pypi for which I've done several contributions before the owner disappeared. Never got any response.
So losing the keys means that I have to fork my own project :D
Will there be a way to determine if a package has all owners 2FA-enrolled? Maybe even a public key that is linked to the account? It would be good to have an API-queryable mechanism linking identity with signing.
By the end of 2023, all users on PyPI will be required to enable some form of 2FA to perform packaging operations. So the distinction between having 2FA and not will become moot.
I see the list of "management actions" does not explicitly include project or account deletion (after 2FA imposed), anyone know if those will be included?
Sorry, now I'm confused. Suppose it is 2nd Jan 2024, 2FA is now required, I have an account and a sole-owned project, I don't have 2FA.
From above, I cannot delete the project because "Project deletion would fall under 'management'" and management requires 2FA, Or from above "you can still delete the project" so I can delete the project without 2FA?
From reading around, one cannot delete an account if it has sole-owner projects (right?), so in the former case, one could not delete one's account without setting up 2FA to delete the project first, contrary to "you can elect to remove your account at any time"?
Why is the 2FA rollout going at such a glacial pace?
In July last year it was announced that contributors to the top 1% of projects had to use 2FA.
Is it because of pushback from the developers, or maybe because it is not as easy as it seems?
I'm just curious, I am in no way involved in this.
2FA itself has been deployed and available on PyPI for years, and has seen a decent amount of adoption. That part hasn't been glacial :-)
Mandatory 2FA, on the other hand, is a little thorny: the Python packaging ecosystem has a lot of very popular, very stable packages that receive relatively few updates, meaning that it takes a long time to onboard those maintainers (without risking locking them out of their accounts or their abilities to do rapid security releases).
Now that GitHub is mandating 2FA, however, the argument for slow-walking it becomes much weaker: the overwhelming majority of maintainers will need to enable 2FA anyways to make changes to their codebases, so PyPI can effectively "hitch" onto that wave and do a mandate at the same time.
TL;DR: Moving hundreds of thousands of users to a mandatory 2FA scheme is relatively disruptive; the circumstances have aligned such that doing so now is minimally disruptive.
Interesting,
I kinda understand, since I work for a big org where everything is very slow because of bureaucracy and fear of change, but here the big projects were already forced to use 2FA.
It would seem logical to force the contributors to use good security practices right from the start. I would have probably started with those.
Anyway, I don't want to complain. I believe it's a good step towards securing the software supply chain.
> It would seem logical to force the contributors to use good security practices right from the start. I would have probably started with those.
I agree, but that's the benefit of hindsight :-)
PyPI is simultaneously one of the oldest and most active language packaging ecosystems out there; a lot of the things we treat as "table stakes" in terms of good security practices weren't even invented when it was first released.
The consequence of all of this is that there's a lot of ossification, and things can't be changed suddenly without (reasonably!) upsetting a lot of people who are invaluable to the community. It'd be great in terms of security if we could just force it, but that wouldn't be fair to them, to their historical expectations, etc.
Edit: I should say: I'm not a maintainer of PyPI, just someone who has contributed to it. My opinions aren't representative.
> The consequence of all of this is that there's a lot of ossification, and things can't be changed suddenly without (reasonably!) upsetting a lot of people who are invaluable to the community. It'd be great in terms of security if we could just force it, but that wouldn't be fair to them, to their historical expectations, etc.
I can't tell if you're being very overly generous to folks, or if there's something I'm really not considering. Given that I've been using a Yubikey, password manager, ssh-only auth, etc for ... idk, nearly a decade?
Did it take a whole hour to learn + setup? Yes. Do I think that over time I've been more secure, and had to deal with less headaches from the repeated LastPass breaches, password leaks, compromises, etc? Oooh absolutely.
---
Sorry Python ecosystem! Sorry a package was compromised by a careless dev. PyPI? Oh what about it? Why didn't it require basic security mechanisms to upload packages downloaded by literally tens of millions of users? Oh, we couldn't inconvenience lazy devs, come on now.
I just can't with people. These things matter. Taking hard stances and making people uncomfortable sometimes IS NECESSARY.
And to be clear, I'm not trying to come for you, woodruffw (or the PyPI team, god knows I've seen how HN acts with forced 2FA), I'm expressing a generous frustration that there doesn't seem to be a firewall where "general dev laziness" is overridden by idk, any sense of the commons, or any basic understanding that valuable assets WILL be attacked, passwords WILL be compromised and that some "root of trust" with something I physically can hold is pretty much required these days.
LOL HN really does not like hearing inconvenient truths or truths that point out their blind/lazy spots.
Have you considered that perhaps you are the lazy one?
You don't want to inspect the source code yourself for security holes, you don't want to pay someone to do it, and you don't want to establish a direct trust relationship (personal or legal) with the original developers.
Instead, you want to trust automation and externalize blame.
And you call others lazy?
> ... any sense of the commons
If you have any sense of the commons beyond Hardin's simplistic and historically flawed argument advocating mandatory population control, then surely you can understand how PyPI admins are trying to balance the traditional commons use rights based on cooperation and responsibility with the needs of lazy people like you, while hopefully avoiding any devastating effects akin to how English land enclosure deprived commoners of their rights of access and privilege.
No, I'm talking about the complete shit show that is python packaging and the fact that there is any hand wringing over this (2FA) being "hard" to force on devs.
There's nothing hard about it.
This doesn't have anything to do with auditing source, that's such a creative cop out, subject change, whataboutism.
No, actually, I'm not a giant corp, I can't afford to hire teams to review every commit. Especially across the Python ecosystem, it being what it is. And that's assuming it's even easy to find the damn source, or go from PyPI back to the actual source commit. Which, it often isn't!
Oh and supposedly I have to do this because devs that publish packages with millions of users are too lazy to have some actual security around their release process?
No. Sorry, it's not unreasonable to review a project, skim the source, and determine there's software engineering going on. However, without 2FA, none of that really matters, does it? Oh! And, this whole scenario is moot given that most people aren't pinning with hashes anyway, so your little made-up scenario and the words you've effectively put in my mouth really don't make the point you think they do, anyway! In fact, thanks for another great point to add to my initial list!
> you don't want to establish a direct trust relationship (personal or legal) with the original developers.
Do you actually understand what this thread is even about? What in the hell good does that do me when their laptop gets swiped at a conference and their latest package gets replaced?
> while hopefully avoiding any devastating effects akin to how English land enclosure deprived commoners of their rights of access and privilege.
Wow, I can't believe I wasted my time reading your post, let alone replying to any of it. I love a dramatic flair but that's in poor taste.
Because there's a long history of people using cooperation and responsibility in PyPI, and they don't have your pressing need to change. Not because they are lazy, but because they don't care about your personal needs.
Some only upgrade every few years. For me, it seems like the PyPI upload process changes faster than my release cycle.
Somehow you think that long tail of distributors - not "packages with millions of users" because they had to switch to 2FA a couple years ago, but packages with perhaps 50 users - will jump to 2FA within a couple of years?
Calling them lazy certainly doesn't help encourage transition.
> I'm not a giant corp
So you entered the Python ecosystem without knowing fully how it works (understandable), didn't find that it meets your requirements (understandable) and decided to place the blame squarely on other people. By calling them out as "lazy."
Thing is, "lazy" can be turned around on you too.
Sounds like you were too lazy to figure out the problems with Python before you got stuck with it. You should have researched it first - then you could have gone to some other language.
Of course, the real issue is that you learned things over time, and it's hard to switch at this point.
Just like PyPI.
> too lazy to have some actual security around their release process
If they haven't updated in 4 years, what's the difference to you? You really think everyone is releasing all their packages all the time, and as on the ball as you are?
> back to the actual source commit. Which, it often isn't!
What arrogant presumption! It's free software. You paid nothing, so you're already getting more than you paid for. While at the same time you are making money from their work.
Your attitude, repeated over and over, is causing open source project maintainers to burn out.
You want that? You pay for it, or pay someone else to do it for you, or do it yourself.
I don't develop open source now because of the attitude of people like you.
> but that's in poor taste
To the contrary. PyPI devs must balance between the needs of corporate and professional users like you, and student and hobbyist programmers who don't care about "real security" but have something they want to publish, and only touch it every few years.
Make the barrier too high, and they drop out, just like what the enclosure laws did to the actually-well-managed commons in England.
Sure, perhaps you want a market floor meant only for corporate and professional accounts. But I know that's not what the PyPI devs want because I've heard them talk about it.
Make the barrier too high, and people will migrate to alternate providers. It's easy to set up a 'simple' PyPI server - I have a static one for my software releases since I'm tired of dealing with PyPI changes when I just want to update via rsync/ssh.
But you know what? Pip and other programs do a really bad job of isolating packages between multiple servers. I can see pip checking my server for "pip" updates, and I can see someone tried to install "numpy" from my server. I could easily have given them a fake one.
The PyPI devs can't easily change how that protocol works, plus namespace conflicts become even worse with multiple providers, so they really do not want to encourage a migration to other systems.
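The isolation problem being described is the classic dependency-confusion setup: pip treats `--index-url` and `--extra-index-url` as equally authoritative for every package name, so a name that exists on both indexes can be served from either (the URLs and package name below are made up):

```shell
# If an attacker registers "internal-lib" on the public index with a
# higher version number, pip can happily prefer it over the private copy.
pip install --index-url https://internal.example/simple \
            --extra-index-url https://pypi.org/simple \
            internal-lib
```

There is no built-in way to say "only ever fetch this name from that server", which is the protocol limitation the comment is pointing at.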
Move to another programming language with a distribution security model that meets your high standards. Don't stay with Python - you'll be hating it for your entire career.