I had to get a coworker set up under WSL because of a goof-up with IT: instead of shipping a Mac, this coworker ended up with a Windows machine, and all of our tooling is Unix-based.
I had them set up with WSL as a way of moving forward. Here are a few things I noticed:
- Different home directories: the different distros are accessed as network shares. It can get confusing where "home" is unless you stumble a few times and get in the habit of verifying where files actually live.
- Issues with CR/LF. You need to be careful when using native Windows tooling and saving files with Windows line endings vs Unix ones. To non-savvy users the content looks correct, but on Unix the tooling will fail in weird ways.
- Visual Studio Code integration was actually excellent; my coworker ended up using VS Code for filesystem operations more than Windows Explorer.
- The new Windows Terminal is fantastic; hopefully it's bundled with Win 11.
I'm really grateful for the technical capabilities that WSL brings. I hope the dev teams are focused on fixing the subtle UX issues that trip up users.
If you think that's fun, just wait until you encounter the clowns who copy-paste random shit directly out of Word docs, containing C1 control codes masquerading as "smart quotes", into your UTF-8 source.
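If you want to catch that in review, here's a quick Python sketch (file name hypothetical) that flags C1 control characters and maps the common Windows-1252 quote bytes back to real quotes:

```python
# Flag C1 control characters (U+0080-U+009F): the usual sign that Windows-1252
# "smart quotes" were mis-decoded on their way into a UTF-8 file.
from pathlib import Path

text = Path("main.c").read_text(encoding="utf-8")  # hypothetical source file

for lineno, line in enumerate(text.splitlines(), 1):
    for ch in line:
        if 0x80 <= ord(ch) <= 0x9F:
            print(f"line {lineno}: suspicious control char U+{ord(ch):04X}")

# Map the common mis-decoded quote characters back to the intended ones.
fixed = text.translate({
    0x91: "\u2018", 0x92: "\u2019",  # 'single' smart quotes
    0x93: "\u201C", 0x94: "\u201D",  # "double" smart quotes
})
```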
The UTF-8 BOM still has some uses, unfortunately. I had to convince Excel to parse a CSV file as UTF-8, and inserting a BOM at the beginning of the file was the easiest solution.
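In Python that's a one-liner with the utf-8-sig codec, which writes the BOM for you (file name and rows are made up):

```python
import csv

# "utf-8-sig" writes the EF BB BF BOM first, which is what makes Excel
# interpret the file as UTF-8 instead of the local ANSI code page.
rows = [["name", "city"], ["Åsa", "Tórshavn"]]
with open("report.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)
```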
> a goof-up with IT: instead of shipping a Mac, this coworker ended up with a Windows machine
That kinda implies this isn't a small shop, so "breaking" a Windows machine IT provisioned in order to put Linux onto it might be out of the question, whether in terms of requisition, access to the UEFI, or even network configuration.
Or said coworker is more familiar with Mac/Windows than <insert Linux distro and particular window manager here>, so this is the best of a bad situation.
You can Secure Boot Linux with UEFI. For a laugh I do it on my Arch lappie, but my Home Assistant Debians do it out of the box and so do my Ubuntu LTS servers. I gather that SuSE does too, and probably many others.
You can domain join via Winbind or SSSD. Exchange works via Evolution or KMail, and other workarounds exist. TPM 2.0 has been supported for quite a while, so all good for encryption and other attestation tasks. We have rather a lot of host-based firewalls (sore point 8), but it looks like the latest effort might stick for a while longer than the past ones.
...
Oh, you use that ... vendor ... SSL VPN. OK, can't do that and that's generally where it falls apart. We do have several.
I can and do operate solely on Linux but I have the benefit of owning the company. Some of my customers insist on their gear/setup for their environment and that's fine: they get to buy it too.
Most business IT over a few hundred employees looks nothing like this. They're going to be managed laptops with locked UEFI and, if you got past that, you or your manager would get a call from IT asking why it isn't showing up in the management system anymore and telling you to ship it back for a restore.
At a larger enterprise organization (that isn’t tech/FAANG) it could be considered a fireable offense.
It's extremely frustrating that IT policy in large corporations is the same for, say, clerical staff and software engineering, down to buying the same hardware for ease of management, even when that makes no sense because the use cases are so different.
You'll be stopped long before SSL VPN. Right about the "you're running an unsupported/unauthorized OS on your company-owned laptop, send it back so we can restore to Windows"
And then you send the laptop and your two week notice. People seem to forget or not take advantage of the employees' market that's been going on for several years in this field.
I think they meant the BIOS/UEFI is locked out by IT. That is the case for me. Also, TPM support is only available on some distros. You can use Linux as long as it is Ubuntu or Red Hat, and maybe Arch if you are lucky.
From the original comment, it's likely an IT department, meaning a large org, meaning slow-changing policies in place about how the machines are set up, often in order to meet even slower-changing compliance laws.
Your seven different "vendors" solution to bodge domains/Outlook wouldn't be supported by IT, which is a problem when the machine breaks or an audit happens...
I think one of the main problems is that when you install Git for Windows, the default setting is to convert LF to CRLF when checking out code. This can cause a bunch of issues if you later try to work with those files within WSL (especially when doing any git-related stuff). I don't really understand why that's the default, or even why it's presented as an option during installation; it's been ages since I actually had issues reading/editing Unix-style files on Windows.
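If I remember the knobs right, the fix is one global setting, or a .gitattributes entry committed to the repo so every clone behaves the same:

```
# Per-machine: convert CRLF to LF on commit, never convert on checkout.
git config --global core.autocrlf input

# Or per-repo, in .gitattributes:
* text=auto eol=lf
```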
The biggest gotcha on Mac is using git on the default case-insensitive volume. I always create a separate case-sensitive APFS volume for all my git repos to avoid subtle mismatches between the filename git knows about and the filename macOS knows.
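Creating it is a one-off diskutil call, if I recall; the container ID and volume name below are placeholders (check diskutil list first):

```
diskutil list
diskutil apfs addVolume disk3 "Case-sensitive APFS" Repos
```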
The new Notepad version was released to non-Insiders in Windows 10 1809, so it's been around for almost 3 years now. I do understand that git wants to maintain the best compatibility, but at this point I would argue that converting to CRLF on checkout causes more issues than it solves.
It would be weird if platform compatibility were measured by making sure one single program works, a program which most people using git probably don't even use.
It's usually the other way around these days: most Windows apps can handle LF just fine, but Unix apps often cannot handle CRLF. So if you use Windows to generate a text file, things break when you try to use it from WSL.
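When a Windows-generated file does sneak in, normalizing it from the WSL side is trivial; a sketch, with a hypothetical path:

```python
from pathlib import Path

# Rewrite CRLF as LF in place before handing the file to Unix tooling.
p = Path("/mnt/c/Users/me/deploy.sh")
p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))
```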
I've used Linux since 2000. While I've made countless attempts to make it usable as a desktop, I was forced to recognize a long time ago that, for me, it isn't a good enough experience.
It always needed lots of time to configure things to make them half decent, and the configuration would break from time to time. Some software I want to use either runs poorly under Wine or can't be run at all.
I am much more comfortable using Windows as a desktop and WSL has eliminated the need to dual boot or use a Linux VM.
I've been using Xfce for about 10 years, and recently I've stopped customising it. I don't care about the theme, backgrounds etc. All I do is configure a single panel at the top of the screen and install Plank. I have a few keyboard shortcuts for window tiling, and that's it. I haven't used Windows in a long time, so I'll be honest and say "I don't know what I'm missing", but I'm willing to bet it ain't much, especially in return for what I get out of Linux. Maybe I should give it a go?
I also use the system for a number of activities that are unavailable on Linux, and rely on WSL to give me a reasonable facsimile of a *nix box.
I loved the new WSL2 upgrades but rolled back because I simply couldn't do the rest of my work as quickly in Windows 11. Forcing the grouping of apps on the taskbar was a major productivity hit for me, as was the simplified context menu in Explorer.
Also, for more privacy: Microsoft doesn't get to constantly see what you are doing. They got GitHub and O365 with OneDrive integrated into all the big companies; way too much power already.
I've tried. I've been on Fedora for years but things just break. The sound breaks, the video breaks, and with the latest release VS Code and Teams broke. Also, I have two graphics cards and one has been unusable for 6 months.
If you don’t like Fedora, perhaps try a different distro. Fedora’s commitment to free and open software has the downside of making proprietary hardware less reliable. From my own experience I would recommend Ubuntu or Manjaro. Those break rarely, if ever.
It is hard to characterize Linux distros in broad terms because they all take such different approaches — two could be as similar as Windows 10 vs 8, or as different as Windows vs Mac!
I agree. I use separate computers for each operating system I use, which is currently Windows 10 and FreeBSD. I don't use macOS much any more, and Linux seemed to be edging closer to Windows, so FreeBSD brought me back into a more traditional Unix world with TWM and command lines. I enjoy it.
I am waiting for two things before I move full time to Linux: better DE UX, and graphics-card sharing for VMs on consumer cards.
GNOME 41 looks compelling, and by the time consumer cards can be shared with VMs (for gaming, VR and that sort of thing), it might be as good as macOS's desktop environment.
I can't wait to ditch Windows full time. It's such a confused mess of an OS.
Running it in a VM and then connecting to that VM via VNC and going full screen was a revelation to me. Even over wifi it wasn’t that far off. Completely recommend.
Dirtier than that. I have an addiction to Intel NUCs, and put ESXi or Proxmox on them. You can get 10GbE connections with a little effort and then it's brilliant, but wifi is surprisingly usable.
One employer had an Apple-only policy which initially forced me into that world, and then I decided to test the Apple ecosystem out. For several years.
It was ok, the level of integration and hardware quality being even excellent, but overall Linux is a better set of compromises for me.
You have no idea what you are talking about. You can install any kind of distribution by copying over their rootfs. For example, I am running Void Linux on WSL2.
Yes, sure it can: create a usable (chrootable) Linux From Scratch rootfs tarball and import it using wsl --import, and everything should be fine. Though don't forget to remove the kernels, because WSL doesn't load them; instead, WSL uses a custom kernel shipped with the Windows system.
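The import itself is just (distro name and paths are examples):

```
# Register the tarball as a new distro, then start a shell in it.
wsl --import LFS C:\wsl\lfs lfs-rootfs.tar
wsl -d LFS
```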
One of the most frustrating things is that WSL is not a booted Linux. systemd and other things don't work out of the box, even in WSL2. In WSL1 it is almost impossible to make systemd work, and in WSL2 a wrapper is needed to create a PID 1 (if I am not wrong) environment to emulate a booted system. Other than this, the kernel used by WSL2 is a custom, Microsoft-tailored LTS version, which may cause problems if a program relies on very new syscalls.
People are voting me down, but it's mostly from misinformation. WSL runs standard Linux distributions; it's the real deal, not an imitation or anything that could be considered bad. If you face issues with it, it's mostly because of a fundamental misunderstanding of the setup (for example the person trying to run systemd…), not because it isn't a „real Linux“.
That's exactly what I mean by misinformed. WSL 2 isn't using a simple virtual machine in the way that's generally understood. It uses a lightweight VM on Hyper-V, so there is no boot time. IO in itself isn't slow if you move files to the Linux FS. What is slow is the communication between the Windows and Linux filesystems.
I know, and I was referring to the Windows <--> WSL IO.
It's nice to be able to run Linux in Windows, but it's like driving a Ferrari inside a Honda. You do it because it's nicer than context switching when you have to drive the Honda, not because it's like driving an actual Ferrari.
Any virtualization solution has the complexity of the sum of the environments being run simultaneously, the virtualization solution, and the surface where all 3 interact. Furthermore the guest solution inherently deals with the system at a higher level wherein you are stuck with how the host handles lower level aspects.
Such a system is simultaneously worse and more complicated as a necessary price for running both native windows applications and linux ones. In most cases you would be better off just running Windows or running actual Linux.
For example my Linux system has a ZFS root filesystem which provides a lot of interesting features including the ability to boot a prior version of my install at startup, along with 37 other useful things. Running a linux VM with that feature wouldn't magically port this functionality to the rest of the system.
My Linux system is vastly less likely to fall victim to a cryptolocker or other malware situation but running a vm under my Windows desktop wouldn't deliver this benefit.
My Linux system handles virtual desktops per monitor making it easy to swap one monitor at a time to a different workspace or all together if I please. Again neither running an app via a compatibility layer nor a fullscreen VM would provide this functionality.
WSL is Linux lite, without most of the interesting features, and able to be withdrawn in any given year by Microsoft. Worse, it could be extended in ways that depend on Windows features, in a classic Microsoft move.
WSL runs a standard Linux side by side with Windows. There are no "Windows parts" overlapping with it. A way to think about it is that you have two systems that share devices and filesystems.
If you're gonna do dev on WSL, then make sure all of it is on Linux. Don't mix and match.
They still haven't corrected the SEVERE disk read speed when accessing a Windows directory. For example, if you use zsh/Oh My Zsh and have set up any of the themes that pull git info for the prompt (over 99% of them do this), cd into a git repo on the Windows side (cd /mnt/c/repos/example_git) and expect a minimum one-minute wait for simple repos.
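The only workaround I've found is telling Oh My Zsh's prompt to skip the git status check inside those repos; if I remember its internals right, it honours these per-repo config keys:

```
# Run inside the /mnt/c/... repo; Oh My Zsh's lib/git.zsh checks these keys
# and skips the expensive `git status` when they're set.
git config oh-my-zsh.hide-status 1
git config oh-my-zsh.hide-dirty 1
```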
The WSL IP address changes on every restart.... ALWAYS. So trying to set up a local web server for testing becomes a hassle.
That's weird. Any server I run on the WSL side can be accessed via localhost from the Windows side. It's been like this since the beginning of WSL, if I'm not mistaken.
One issue that I've run into is that if you only bind to IPv6 in WSL, then you will only be able to access the service using IPv6 on the Windows side (i.e. ::1 works, but 127.0.0.1 doesn't). OSs usually automatically bind to IPv4 whenever you bind to IPv6, but that's not the case with WSL (actually, it does bind to both, but only inside WSL, not on the Windows end). Lots of programs (like node) bind to IPv6 by default, so this can catch people off guard and they wonder why 127.0.0.1 or localhost isn't working.
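The fix that's worked for me is binding both families explicitly instead of relying on dual-stack behaviour; a minimal Python sketch (the port is arbitrary):

```python
import socket

# IPv4 listener: this is the one WSL's localhost forwarding exposes as
# 127.0.0.1 on the Windows side.
s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s4.bind(("127.0.0.1", 8000))
s4.listen()

# Separate IPv6 listener on the same port, so ::1 keeps working too.
s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
s6.bind(("::1", 8000))
s6.listen()
```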
> cd into a git repo on the Windows side (cd /mnt/c/repos/example_git) and expect a minimum one-minute wait for simple repos
Yes, that will be slow; don't use /mnt, anything under it will crawl. Use /home or somewhere else and save your work there. In VS Code you can access any dir, and now there's GUI support, so there's no excuse to be using /mnt. I believe some of the WSL docs do tell you this.
> The WSL IP address changes on every restart.... ALWAYS. So trying to set up a local web server for testing becomes a hassle.
That’s weird, I’ve never had any issues relating to running a web server. Listening on a port in WSL automatically exposes it on Windows. I’ve not had to configure anything when restarting etc.
I occasionally run into an issue that breaks localhost port forwarding. I can access the service that's running inside WSL2 via its actual IP (172.#.#.#) when 127.0.0.1 breaks. But that IP address changes every time the WSL instance restarts.
I've had to set up a scheduled task[0] to update an entry in my hosts file every time WSL is assigned a new IP address, which works well enough.
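Roughly, the task boils down to something like this on the Windows side (the `wsl` hostname is my choice, and writing the hosts file needs an elevated shell):

```python
import re
import subprocess

HOSTS = r"C:\Windows\System32\drivers\etc\hosts"  # needs admin rights to write

# `wsl hostname -I` prints the distro's current address(es); take the first.
ip = subprocess.check_output(["wsl", "hostname", "-I"], text=True).split()[0]

# Drop any previous "wsl" entry, then append the fresh mapping.
with open(HOSTS, encoding="utf-8") as f:
    lines = [l for l in f if not re.search(r"\bwsl\s*$", l)]
lines.append(f"{ip} wsl\n")

with open(HOSTS, "w", encoding="utf-8") as f:
    f.writelines(lines)
```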
Excuse my ignorance here, but why are you using its actual IP instead of just localhost or 127.0.0.1? When starting up a local server inside WSL I've never needed to do that (unless it needs to be accessed from somewhere else on the network).
I know the blog post says localhost redirection “often fails” but that’s just never been the case for me, and the citation it uses is just a link to the WSL issues page rather than a specific issue.
Localhost mostly works, and I use localhost when it does work.
But when it fails (once a week or so, and this keeps happening on 3 different machines running the latest/a recent Insider build), usually after the PC wakes up from sleep, I have to restart the PC to access via localhost again, which is undesirable for me because I have to spend 10 minutes restoring all apps and windows to how they were.
Or I can use the IP address directly and map it to a friendly hostname, in my case `wsl` and keep working. This is done behind the scenes with a scheduled task, so when I see `localhost:PORT` doesn't work, I just retry it with `wsl:PORT`.
The reason I haven't linked to a specific issue is that there are just so many [*], and localhost sharing is still buggy and hasn't been fixed even after 2 years.
Fair enough, hopefully that gets fixed, my machine doesn’t go to sleep that often so maybe that’s why I haven’t been affected. My servers are not usually long-running, I only use them when developing so that’s probably why.
For future reference linking to the issues page when there’s over 1.2k issues is unhelpful to the reader, you’re better off putting that list into your blog post, or just pick one that’s the most useful like https://github.com/microsoft/WSL/issues/7492. I doubt the reader is going to trawl for every single issue to find the ones you’re talking about. It’s like saying “Google it yourself”.
When I had linked it in August 2020, it mostly had results for issues related to localhost forwarding, so it made sense at the time. But linking a live search wasn't the best choice, I agree.
I think linking to a specific issue is more useful, like you say. I'll update the post.
> It’s like saying “Google it yourself”.
Funny enough, almost all traffic is from organic searches :)
Yeah, this is an annoying bug. It seems related to resuming from sleep. For WSL 2, one workaround is to just restart WSL; you're effectively rebooting the VM.
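From a Windows shell that's just:

```
# Terminate the WSL 2 utility VM (all distros), then start a fresh session.
wsl --shutdown
wsl
```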
The whole setup with WSLg is pretty nice. Virt-manager works, but more importantly with nested AMD virtualization (which I believe is indeed limited to Windows 11; Windows 10 has nested Intel support, though) it was super easy to use macOS-Simple-KVM[0] to run a Mac VM, even if the FS performance isn't great and I don't have an extra AMD card lying around to passthrough for hardware acceleration.
WSLg requires Windows 11. I installed it only to try WSLg. The UI didn't scale with Windows scaling/DPI settings, which was very annoying, and setting the display resolution reduced the sharpness of all fonts. Not being primarily a Linux user, and hating the Windows 11 annoyances, I regretted the upgrade and reverted to Win 10.
I've noticed this too around here. Titles will be changed and their meaning altered, effectively putting words in the original author's mouth. Seems kinda unethical to me, but it happens all the time.
HN automatically edits out some patterns from titles. If it happens to you and you want to put it back, you can just edit the title again after submission and it will stick.
Check the guidelines, it encourages neutering headlines in multiple ways; believe the idea is people can form their own judgement on whether it's good or bad.
I found the headline here actually incomprehensible. Had Microsoft built Windows 11 on top of WSL? Built part of it that way?
What you'd mean is "Windows 11 includes a revamped Windows Subsystem for Linux", if you wanted to remove the editorializing. But the article is editorializing: they don't like the rest of Windows 11 and say so.
Without "the best" the title reads to me as that they somehow removed chunk of original Windows and replaced it with WSL. That's why I came to this topic. I thought to myself - really - so they now replacing windows code with Linux? Original title is clear and not clickbaity.
You can update to Windows Subsystem for Linux GUI while still using Windows 10, as described at https://github.com/microsoft/wslg . Emacs and gnuplot work fine, and that's all I need.
Sadly, it seems you can't get on the Dev or Beta channel since June 24th (if you weren't enrolled before), so you won't be able to get the Insider build with WSLg on your existing Win10.
Only if you install it from a clean ISO and update.
Source: https://answers.microsoft.com/en-us/insider/forum/all/how-in...
I tried it just now, and you can only choose the Release Preview channel, which still has only build 19044.
Yep - seems like they originally did all the work to have it running in win 10, and then decided to take it away as a carrot to get people to upgrade to win 11.
I'm probably going to make 10 the last version of Windows I install.
You can also get this great feature in a standalone version, for free, and without any Linux compatibility or Windows integration issues. Definitely recommend checking ubuntu.com for it ;-)
For those who haven't tried gaming on Linux for a couple of years: things have gotten a lot better. Most games basically run flawlessly, with the notable exceptions mostly being multiplayer games with incompatible anti-cheat solutions (and even that's recently gotten a lot better; EAC officially supports Linux now).
You run native Linux games, and when they aren't available, you run the Windows versions using the Proton compatibility layer (built right into Steam). There is also Lutris for Windows games not available on Steam.
I do a lot of things that use USB RS-232 adapters or things that look like them, and I support both Linux and Mac effortlessly (and even FreeBSD and any other Unix), and none of it is possible under WSL2.
If all you want to do is write web services, there are native windows versions of all the scripting languages and web frameworks.
If you have some reason to actually use linux, then just use it. It's free and well-documented. Nothing's stopping you.
If MS actually cared about supporting developers, then they would provide a 1st class form of Wine, so that windows apps could be run from within linux.
Since they don't do that, this exposes that they don't actually care what would be convenient for you, and so, you should decline to work with people who don't work with you. That's how you can see through bullshit.
You should reject their products and sales pitches until their behaviour actually changes, and disregard all the lip service, "gifts" with strings, and excuses for dark patterns. Use their stuff when you must, but under duress and seeking other options at every turn.
Attaching a small computer running some other code besides WSL2 is not an answer to the charge that WSL2 can't access USB. It still isn't accessing USB; the code on the USB server device is.
That's not semantics. What I mean is:
For any low-level code of my own that I wrote to run on *ix, I can just as easily run that same code on an Arduino or Pi, and WSL2 and the USB server hardware would just be pointless extras.
For any code that I didn't write and can't port (say, the closed-source Xilinx FPGA programmer), there are native Windows versions which work better and are better supported than the Linux version anyway. So here too, WSL and USB server hardware would both be pointless extras, and backwards.
People are bamboozled and forgetting to step back and remember what the point of doing something even was in the first place.
I say it's a mistake to even get sucked in to the question of how to work around any deficiency in wsl, instead of asking why you even care or want to run a linux app on windows in the first place.
Most people prefer Windows over Linux, even if they work on software that is deployed to Linux, or maybe they just have a single Linux application that they need.
In my experience, there are a lot of tools that are easier to use on Linux, or are Linux-only, without a proper Windows alternative. WSL(g) allows seamless experience for running such apps, with some caveats (such as USB access.)
One can wish. The more likely end game seems to be one where no one actually bothers using a real Linux installation, as you can do everything you'd need Linux for on your Windows machine. But your Windows machine has become a 'trusted computing' device that only boots and runs most software if it 'hasn't been tampered with'. Say, by actually installing Linux on it.
> The more likely end game seems to be one where no one actually bothers using a real Linux installation, as you can do everything you'd need Linux for on your Windows machine.
I'm 100% convinced this is Microsoft's actual goal, and it's going to succeed very quickly at least on the desktop.
Then one day, when a good number of apps will conveniently require WSL to run properly, or to run at all, MS could build their own Linux distribution containing licensed parts of Windows (drivers for closed hardware, graphics libraries, etc.) de facto displacing the original Linux from every other field of application including embedded, automotive, industrial, etc.
Right, they've ceded the server space to Linux but want to cement Windows' position on desktops in a Linux-server world (maybe regaining some ground from Macs in the process, since this is one of the reasons they're so popular with devs)
Because those devs have proven they only care about POSIX; whether it's Linux or something else, they don't care at all, otherwise they would be giving money to Linux OEMs.
> Did we mention that a TPM isn’t going to protect you from UEFI malware that was planted on the device by a rogue agent at manufacture time?
I find this to be a pretty weak strawman, and one that not many people would consider to be part of their threat model (and if they are, they'd just purchase the part from a brick-and-mortar store so that, if there is malware, it's non-targeted).
Microsoft is mostly doing this for their endpoint security enterprise customers. The objectives aren't exactly hidden, either:
- Don't want anyone to be able to get data off of a bitlocker-encrypted drive[0]
- Don't allow things like O365 login credentials (including temporary auth tokens) to be pulled off a drive[1]
- Prevent thunderbolt 3 DMA (eg. from a rogue usb on the back of the computer)[2]
And yes, they probably also don't want people to keep hacking online video games, which is why Riot uses TPM attestation as an additional security measure to prevent people banned for hacking from evading bans in Valorant[3].
> And yes, they probably also don't want people to keep hacking online video games, which is why Riot uses TPM attestation as an additional security measure to prevent people banned for hacking from evading bans in Valorant[3].
on most of my gaming boards you buy the TPM and plug it into the board like you would a USB connector
total cost: ~$15 (ignoring the current craziness)
if I'm a wallhacker/aimbotter how would this stop me?
Usually they ban every part they can get a SN/unique ID for, TPM being just another signal. Modular TPMs are being phased out anyhow, with new AMD and Intel chips having it built in.
As I said, it doesn't actually do much for anti-cheat besides act as a hardware ID for bans. You can still run cheats and hack your own system with the TPM fully intact; it's just another method of increasing the cost required to get back in after being banned. Now you have to buy an entirely new CPU every time, at least once they fully drop Windows 10 support in \d{2} years.
There's quite literally only one potential exploit that would work for the purposes of ban evasion: extracting the private key. Since every CPU is signed by Intel/AMD's CA, the Riot servers require your CPU to attest by signing a secret message, so you'd need a surefire way to extract the private key from other machines to then spoof TPM responses using your existing hardware - that, or you have an active worker agent on other PCs proxying the attestation process.
And, if you were actually able to find a way to extract the private key on TSMC's newest process nodes, there are much more profitable ways to use that knowledge, e.g. selling it to Zerodium or nation-state actors that are eager to decrypt iPhones.
Reliable chain of trust combined with hardware bans is a pretty high entry barrier, though. You need to change most of your PC hardware to not be banned again, and HWID spoofers are also cheats that have to squeeze through the same filter.
So no, it's not a completely invalid idea. At some iteration, it will make the anticheats even better, and they already work pretty well (regardless of players' oversized perception of cheaters running unpunished). A proper chain of trust + hardware signing of mouse input + kernel hardening + hardware fingerprinting will make most cheats irrelevant (including the ML-based ones). You'd have to mod your hardware to be even able to run cheats; which is also preventable, just ask console manufacturers.
The only downside is, this would turn your computing device into an appliance remotely controlled by several companies. And the gamers will be perfectly happy to have it at that, because everybody hates cheaters, and even talking about that is stigmatized.
How many cheats have you seen on consoles? I guess none. Besides maybe an occasional lagswitch, or a packet manipulation/sniffing thing, but that's due to developers' lack of expertise, because all of that is avoidable. That's because consoles are locked down completely. So yes, it is useful if implemented properly, and if your PC is totally locked down. It would be silly to deny that.
(and BTW my bank already does that, requiring non-rooted stock firmware for its app on mobile. With Samsung for example, rooting amounts to warranty loss; maybe in EU it's different, but I'm not in EU)
Whether you or me say yes or no to TPM is not hugely important. Most people are absolutely happy to trade freedom for convenience, and it aligns with Microsoft's incentive to lock everyone into using their products. This isn't new at all, I've seen loss of PC modularity and openness discussed since late 90s.
However, there are several counterbalances for that incentive.
1. Platform fragmentation, the major one. This alone can delay the inevitable for any amount of time.
2. Backwards compatibility.
3. PCs being used for many purposes, not just as an appliance. This is a minor but noticeable one.
4. Some groups advocating for the platform openness. This one is of little relevance in practice.
Expecting the x86/MS platform to stay open forever is not realistic, because the incentives are biased towards locking down. How much time it'll take to get to that state is a different question, though. That it hasn't happened yet is all that can be said.
Stop playing devil's advocate. It's worse than useless because it takes away the best part of computing: our freedom to own, operate and modify.
> Expecting the x86/MS platform to stay open forever is not realistic, because the incentives are biased towards locking down.
It's only unrealistic when these fatalist certainties are pushed as inevitable. Free, live free is more than the name of a novella, it's an act to be performed, to fight for.
So yes, say no to the TPM and other such measures such as SafetyNet, which are worse than useless to the most important endgame, to live free.
The purpose of my computer is not to protect anyone else's business model.
If there is no way to deal with cheating at games other than relinquishing ownership, disposition, and functionality of my own hardware, that is not my problem, and, it's not true anyway.
> The purpose of my computer is not to protect anyone else's business model.
It is if you want to use someone else’s software that requires it; you can’t have something on your terms just because the cost of using it is paid in something other than fiat currency.
Neither would I, personally. But Linux is hard to manage for corporations especially when people use different distros.
These things may be required by the security department. So WSL is a nice carrot there for them. Management through Windows and Linux on top of it. I worked in endpoint management so I can see the appeal.
However personally I really like the way Linux is free of corporate influence. In fact I switched to FreeBSD for that reason as I feel that big IT is getting too involved with Linux.
But that's a startup, I was speaking more of enterprise (in which I work). We have much more rigid security rules, and we're also a much bigger target for bad actors.
I am pretty sure that WSL was implemented to avoid the gradual loss of users, most of them being developers, to Linux. There is very little reason for existing Linux users to adopt Windows due to WSL.
Been seeing a lot more developers jumping from Mac to Linux and even Windows lately. If something more competitive with the M1 comes from AMD and Intel, we should expect the trend to continue. I think the tide is moving in the opposite direction now compared to 10 years ago.
It's not going to happen. The Windows NT kernel is one of Microsoft's greatest assets; in fact it's better designed than Linux on many axes. Until recently it had better support for async I/O (and the developer experience of io_uring may still not measure up to IOCP under Windows), it was designed for multithreading, and it has a more advanced security model. It also has a more advanced driver model, which is to say it has a driver model at all. The fact that there's a standard ABI for drivers means hardware Just Works under Windows and is still incredibly fiddly under Linux. Microsoft essentially has stewardship over the entire PC platform because they play very nicely with OEMs and make hardware easy for Windows to support.
They are not going to give that up. It'd be the OS-kernel equivalent of giving up Alpha for Itanium.
If anything, the future of Linux is to be a guest under Windows NT. How many Hackernews have I heard repeat the mantra that the best Linux desktop environment is WSL?
Microsoft defines the hardware specification for Windows compatible x86 and all the OEMs that manufacture x86 servers (even ones most likely destined to never run anything other than Linux), desktops, and laptops certify against it.
There are no other standards for what a PC is. All of the open OSes, including Linux, just piggyback on Microsoft's standard. (And if anyone believes that the various giants building custom ARM processors aren't hoping that they can get dominance based on their own spec so that they can displace Microsoft and wield the same power, think again.)
> And if anyone believes that the various giants building custom ARM processors aren't hoping that they can get dominance based on their own spec so that they can displace Microsoft and wield the same power, think again.
ARM already has their own spec for that, the SBSA. So the one wielding power will most probably be ARM itself, not one of the "various giants building custom ARM processors".
I looked around briefly but I mostly see articles saying "SBSA is eventually a goal" for various products. Is the ARM ecosystem coalescing around SBSA?
I think Intel designs the specifications for the x86 platform, not MS. And someone had to do it after IBM quit doing it.
Otherwise we would have the same level of fragmentation in the x86 hardware space that we have in Linux software, or that we see in the ARM space. Much less compatibility.
Don't forget, before the last step, they'll first subsidise universities and schools to explicitly train their students to have a dependence on WSL, so that they can maximally exploit their users when they introduce a fee.
Except Microsoft doesn't offer WSL-based hosting, and they currently don't offer their own Linux distro. This is a complement to their Azure business, basically.
That would only happen if, sometime between 2027 and 2030, they buy out both Linus and the Linux Foundation, rewrite all GPL code not owned by the Foundation, and relicense Linux under something other than the GPL.
I think it would be much easier to just design another POSIX compatible OS and give users some incentive over Linux: stable ABI, stable API, software and hardware that just works out of the box and don't need configuration and maintenance.
I feel like this is somewhat -- err, not a lie, but not quite the truth. There's a word for it but it's eluding me atm -- "misadvertised"?
The headline of the article is: "WSL is finally easy to install—and offers automatic sound/graphics support."
All of this stuff was available before Windows 11, to anyone/everyone.
I know, because the way I installed WSL2 on Windows 10 was the way mentioned in the article ("wsl --install"), and ditto for using the new/experimental Linux GUI stuff.
This was readily available for anyone who bothered to install an Insiders ISO, and there are tutorials and articles about this for Win10 specifically:
That is a big no from me.
I am not interested in being Microsoft's unpaid alpha/beta tester when what I get in return is a less stable operating system and some unfinished bits.
I hate how Microsoft keeps running huge blog campaigns about this or that new features and once you dig into it "Just install the Insider ISO".
No. Tell me when you are done developing and it is ready for production.
Their release cycle is 6 months. I'm perfectly fine with them blogging about what's coming in half a year and available to try out early. Nothing to get worked up about.
Their release cycle is way more opaque than that. I had to wait literally over a year (after "release") for them to let me install the upgrade to Win10 that would give me WSL2 (I don't remember what version that was). They control the upgrade cycle, and I couldn't get it to upgrade.
Maybe a clean install would have helped, but screw that.
You can always run the Update Assistant to immediately update to the latest stable release. What you were waiting ~1 year for was for your build to near end of support and be forced onto a newer version.
I'm not sure what "update assistant" means, but I opened the Settings for Windows Update, and manually told it to check for updates, and it refused to give me anything. I wasn't waiting to be forced, I was proactively seeking an upgrade that all their blogposts said had rolled out a year ago.
Anyway, as far as I can tell all this means that they're knowingly writing false statements, also known as "lying", in case I was too subtle.
Now, maybe their lies are reasonable. A rolling release makes sense, to detect bugs in smaller segments of the population and manage them. And maybe they have some reason to believe that my moderately old hardware still has driver issues that need ironing out, or something, I don't know (the hardware in question is an expensive-ish, highish-perf tower, but it's not that recent). Managing horrifyingly multiplexed complexity across a billion hardware vendors is literally the main value that MS provides, so I don't really want to second-guess them. After all, this is why I want to use WSL.
But the fact remains that their marketing about when things will be available is full of false statements about availability dates.
I had one problem after another with WSL. That made me question why I was even bothering when I could just use Hyper-V instead. I still don't get why you would use WSL. I guess MS is trying to make a more seamless experience for people who aren't Linux-savvy but want some of the capabilities?
Didn't Microsoft tell the press there would be no more Windows versions after 10? The title is "Part of Windows 11 is a revamped WSL". This raises the question: what is (new in) the other part? It is almost as if this marketing is being employed as a distraction.
The disclosed changes to Windows 10 that comprise "Windows 11" sound non-critical, like Firefox changing its UI for the umpteenth time without ever disclosing underlying reasons: https://en.wikipedia.org/wiki/Features_new_to_Windows_11
Surely you don’t equate installing Insider Builds to having it built into the OS itself by default?
That’s like saying my crappy economy car can win races because I could install a new engine. Sure it’s available to me already but that doesn’t mean I always want to go through the effort to get there.
For quite some time, I just connected to the X server I had running on the Windows side of the box, and it worked fine. Now with Windows 11, this is better.
Got the new Surface Laptop Studio two days ago and am very happy with it.
When Microsoft sold Xenix, I recall there being a clause saying they could never compete in the Unix space again. I've searched for that but haven't been able to find anything definitive.
It did! The first assignment in my OS class was to write a shell for Linux. The second one was to port it to NT. The requirement was to make it work with minimal changes thanks to the POSIX subsystem.
WSL2 on Windows 11 uses the same file translation layer between the host FS and the VM FS as it did on 10, so pretty slow. The big improvement with WSLg is that now you can run desktop linux apps (such as your IDE) within the WSL file world, which should give you near-native FS performance as long as you're not accessing /mnt (and if you want to browse those files on Windows, you can always open \\wsl$ in file explorer).
There are issues raised on GitHub (issues #272/273), so I'm guessing it's mostly driver-related. Other than that it seems fairly seamless; the apps appear in an Ubuntu folder in the Windows Start menu.
I tried installing WSL twice during the beta, and in both cases the machine failed to boot afterwards with a boot error about some threading stuff, needing a full reinstall both times.
Now that it's live I'm very reluctant to do this again and have to set everything up again if it fails. Though I suppose I can just pull an image off it...
That sounds like the hyper-v issue related to shared memory between the CPU and integrated graphics. Supposedly changing the value of the "UMA buffer size" in the BIOS works around the issue.
I have to wonder: if, as a user, one of your primary concerns is the WSL performance on Windows, what’s stopping you from just running Linux? Or at the very least simply dual boot for the cases where you truly do need something Windows exclusive, which these days seems to be less and less?
I think WSL3, or should we call it USL, for a UNIX subsystem for Windows, should be a POSIX layer on top of the NT kernel. That way we should get better performance and better integration.
You might be thinking of UMSDOS. It was pretty neat; it allowed for instance C:\LINUX on DOS to become the root filesystem when booted into Linux (which also booted from DOS through LOADLIN), with the DOS root directory (C:\) becoming something else within Linux (I don't recall the exact path, it probably was something like /dos or /umsdos). That allowed you to install Linux like any other normal DOS program (yes, it was normal under DOS for a program to completely bypass the operating system), without having to re-partition/reformat or even change the bootloader. It's the main reason that I ended up using Linux and not one of the BSDs; I could try out Linux without any permanent changes to the current MS-DOS install (and some time later, I noticed that I was nearly all the time running Linux instead of DOS, so I re-partitioned to give Linux a whole ext2 partition).
Yes, I find it a far superior dev setup than a standard corporate Mac. Everything about Windows is inferior to macOS, but I spend 90% of my time in a shell, and having my familiar Arch setup and tools vs crumbling macOS Unix is well worth the trade-off of minor annoyances like terrible start menu and file explorer. Windows Terminal is actually pretty okay.
Anyone using Docker Desktop on Windows has been using WSL. WSL2. I use Debian as my WSL2 distribution; it's OK as a development environment enhancer in that it removes the need to learn PowerShell to do anything productive with a CLI.
It's not as good as running Debian directly. WSL2 was the motivation I used to switch to Debian full time on my main development machine, so I'm not sure Microsoft can safely assume that having WSL2 means people won't dual boot or simply move on.
Yes it's great, I can spin up WSL to get a full Linux environment for web development, mimicking my production environments. I can use Windows IDEs to edit and browsers to test. Even tools like Browsersync work seamlessly running on Linux and syncing on Windows browsers. When I'm finished work, I can shut down the dev environment and play my favorite Windows games without having to dual boot :)
I use WSL daily on my desktop and laptop for work - full stack development and sysadmin/devops. Familiar CLI tools available right from my Windows desktop are a great thing.
I’m really, really happy with WSL since I can use one powerful PC for work, music recording and occasional gaming.
I've used it without issues for years. I also used Crostini containerized environments on my Pixelbook until I just couldn't stand the terribly inefficient 7th gen Intel processor on it.
At work, we support more than half the team on it. Company gave a stipend to build or BYOD, and a lot of folks opted for Win10 w/ WSL for gaming purposes (afaik, Wine didn't support anti-cheat in several titles until recently?)
For me, being able to confidently use any non-Apple device, have full driver support for touchscreen/wifi/speakers/etc. and maintain containerized development environments, is extremely attractive.
- Great scripting environment for a number of "used weekly/monthly" tools (stuff to test backups, automate some processes). Maybe this is more an indication of my lack of PowerShell knowledge to do the same.
- Good way for me to test most Linux binaries our scripts/code might use (wkhtmltopdf seemed to need this; that might have been because any Windows bins we found were out of date/mislabelled.)
- Ansible "controller/host". My god, my favourite use case for WSL. It just works. You still have to deal with Ansible's bytespam errors, but you don't need a whole VM just to update some servers; a minimal example follows this list.
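Something like this, straight from a WSL shell (hostnames and the playbook are made up):

```
# hosts.ini -- trivial inventory
[web]
web1.example.com
web2.example.com

# Ad-hoc connectivity check, then the real run:
ansible -i hosts.ini all -m ping
ansible-playbook -i hosts.ini site.yml
```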
I'm building two things, in two separate WSL environments, for two different clients, right now on my Windows 11 Microsoft Surface Laptop Studio, as I'm looking at Hacker News. It works great.
I use WSL all the time, and have for years. It's excellent. I much prefer it to my last setup, which was trying to make macOS act like a modern Unix.
Not sure what you mean by "man pages". The full Linux man system is there, of course. WSL itself has very few commands of its own. There's wsl.exe but you almost never use it. It has no man page per se but it does have a good --help string as well as online docs: https://docs.microsoft.com/en-us/windows/wsl/basic-commands
Been using it since the beginning of WSL without major problems, while working on music production, without dual-booting. I probably haven't booted up Linux for about a year... this strategy apparently works for many.
Yes, me. Windows is by far my preferred desktop OS, but on the command line I greatly prefer Linux, bash etc.
So I use graphical stuff in Windows (JetBrains Rider, VS Code, MS Office, Firefox etc.), with all my dev files in WSL2. Works great, the best of both worlds!
Previously I was using Git Bash on Windows, which actually worked pretty well too (apart from occasional file system path issues), but WSL2 takes it to another level.
If you don't mind being coupled to an arcane filesystem, networking shitting the bed when you connect to a VPN, settings you can't change, software you don't want but shouldn't remove, functionality that is simple to configure in KDE being split across God knows how many UI/UX schemes and usually requiring third-party applications, worse performance than native, and best of all having your open computing environment running virtualized on top of a closed system that requires you to migrate product keys/activation/whatever for a simple reinstall - yeah, WSL is the best Linux distro for you, bud.
Well, if you explain to me how I can use a Linux desktop without configuring it a lot, and how I can run Visual Studio, Photoshop, Lightroom, Adobe XD and 3ds Max on it natively, I will give it the 101st chance.
On the desktop I care about applications more than I care about the operating system. The operating system should just not get in my way, complicate things for me, or consume my time.
This is it for me too. I have tried so many times to jump to Linux, and it never works out. I mostly use macOS for Unix stuff because it works without breaking all the time, and Windows for gaming and some light terminal use in WSL.
Haha, it's not about games. It's about remote device management, employee tracking, and fewer "bytes _you_ own", more "bytes the corp owns". I actually quit my previous company over this. Now they all work on windblows, their open windows tracked, their hours accounted. And I laugh in Linux.
You do? That wasn't stated in the above post. If you've also tested this, and have different results, then I guess the truth is somewhere in the middle.
They will keep trying and failing. The only reason people still use Windows is the exclusive software and games. Once someone gets proper virtual GPU working for most GPUs then I assure you the preferred choice will be the other way around.