1. For now
2. That kind of traffic mostly died out with plugins like Flash. Streaming media today is usually encapsulated in small, frequent chunks over the same HTTPS channels as the rest of the page, since that's what's available in JS/native. WebRTC reintroduced some UDP transport and can be used for streaming, but it's still mostly used for peer-to-peer calls.
3. https://gitweb.torproject.org/tor-browser-spec.git/plain/pos...
2. So there is some non-TCP traffic. What happens when you load that page in Tor Browser, for example? Does it leak back to your clear Internet connection? Is it simply dropped? This seems like a critical issue.
3. Thanks. Do you know when that was written? To save others clicking the link and finding the applicable section, I'll paste it below. Designing and building your own protocol for Internet transport, compatible with the entire net and performing competitively enough to be usable, sounds like quite a project for a small organization. Note that Google didn't do that; they used UDP for QUIC.
7 Tor Network Compatibility Concerns

Our final area of concern is continued compatibility of the Tor network with future versions of the HTTP protocol. It is our understanding that there is a desire for future versions of HTTP to move to a UDP transport layer so that reliability, congestion control, and client mobility will be more directly under control of the client user agent.

At present, the Tor Network is only capable of carrying TCP traffic. While it will be possible to support the transit of UDP datagrams using our existing TCP overlay network without significant anonymity risks within a year's time or sooner, it is unlikely that this level of support will be sufficient to warrant the use of a finely-tuned UDP version of HTTP rather than a TCP variant.

Long term, our goal is to transition the entire Tor network to our own datagram protocol with custom congestion and flow control to better support both native datagram transport and end-to-end flow control. However, additional research is still needed to examine the anonymity implications associated with this transition[12]. Our present estimate is that a full network transition to UDP is at least five years away.

We are also concerned that even after a full network transition to a datagram transport, it is likely that the congestion, flow, and reliability control of a UDP version of HTTP may still end up performing poorly over higher-latency overlay networks such as ours.

For these reasons, we are especially interested in ensuring that overlay networks are taken into account in the design of any UDP-based future versions of HTTP, and also prefer to retain the ability to use future HTTP versions over TCP, should the UDP implementations prove sub-optimal for our use case.
It's easy to get a WebRTC fingerprint just using a public STUN server; smarter people could presumably deploy their own. I've used it in our ad-tracking JS.
I'm not sure if Tor Browser turns WebRTC off by default; searching found this one ticket, which suggests a default flag, but maybe it's not implemented out of the box.
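To illustrate the mechanism behind that fingerprint (not the ad-tracking JS itself): a single UDP round trip to any public STUN server returns your public IP and port in an XOR-MAPPED-ADDRESS attribute, which is the same binding WebRTC exposes. A rough Python sketch of the wire format (per RFC 5389); the helper names here are my own, and any public STUN server would do as a target:

```python
# Minimal STUN binding-request sketch (RFC 5389). One UDP round trip to a
# public STUN server reveals the client's public IP and port, which is the
# information a WebRTC-based fingerprinting script harvests.
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def build_binding_request() -> bytes:
    """20-byte STUN header: type=0x0001 (Binding), length=0, cookie, txn id."""
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, os.urandom(12))

def parse_xor_mapped_address(resp: bytes):
    """Walk the response attributes and un-XOR the mapped address (0x0020)."""
    pos = 20  # skip the STUN header
    while pos + 4 <= len(resp):
        attr_type, attr_len = struct.unpack_from("!HH", resp, pos)
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS, IPv4 case
            _, family, xport = struct.unpack_from("!BBH", resp, pos + 4)
            port = xport ^ (MAGIC_COOKIE >> 16)
            xaddr = struct.unpack_from("!I", resp, pos + 8)[0]
            addr = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
            return addr, port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are 4-byte aligned
    return None
```

Sending `build_binding_request()` over a plain UDP socket to a STUN server's port 3478 (or 19302 for Google's) and feeding the reply to `parse_xor_mapped_address` yields the public address the server saw.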
> I'm not sure if Tor Browser turns WebRTC off by default; searching found this one ticket, which suggests a default flag, but maybe it's not implemented out of the box.
It does, and WebRTC has been removed at compile time since that ticket was closed (i.e. that was a build flag, not a runtime flag).
> streaming media today is usually encapsulated in small, frequent chunks over the same HTTPS channels as the rest of the page, since that's what's available in JS/native
QUIC and HTTP3 are great technologies, but they are never likely to become the only protocol a service supports.
For one thing, the client connects to a website via one of those TCP-based protocols first, and then a response header (Alt-Svc) informs it that it can reconnect via QUIC/HTTP3. I.e., the service has to have a working HTTP/1 or HTTP/2 webserver first.
UDP is disallowed in many, many places, and many ISPs treat UDP as hostile and rate-limit it.
In the places it works, it provides some benefits. But we're unlikely to see it take over as the sole protocol any time soon.
Not everywhere. FTP-over-TLS is secure, standardised (RFC4217 as updated by RFC8996), and in some environments is still preferred to SFTP, particularly mainframe and minicomputer environments. FTP, due to its age, has a lot of "legacy" features which mean it can work better with non-POSIX filesystems used on mainframe and minicomputer systems than SFTP can. In principle you could add extensions to SFTP to improve its support for non-POSIX filesystems, but why bother when FTP already has very well-established support for that?
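For the curious, FTP-over-TLS is supported directly by Python's standard `ftplib`; a minimal sketch of a session (the host and credentials are placeholders, not a real server):

```python
# Sketch of an FTPS (FTP-over-TLS, RFC 4217) session using Python's
# standard ftplib. Host and credentials below are placeholders.
import ftplib
import ssl

def list_remote_dir(host: str, user: str, password: str) -> list[str]:
    ctx = ssl.create_default_context()
    # FTP_TLS negotiates TLS on the control channel (AUTH TLS) at login
    with ftplib.FTP_TLS(host, context=ctx) as ftps:
        ftps.login(user, password)
        ftps.prot_p()       # "PROT P": encrypt the data channel too
        return ftps.nlst()  # directory listing over the protected channel
```

The explicit `prot_p()` call mirrors RFC 4217's design: control and data channels are secured separately, which is part of why FTPS slots cleanly into existing FTP deployments.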
Another area in which FTP is still preferred is transfer of very large (multi-terabyte) scientific datasets. GridFTP has defined FTP extensions which permit these transfers, including encryption and striping of files across multiple connections and servers (so multiple servers can cooperate to simultaneously transfer different portions of an extremely large file). SFTP has no advantage for this application, and why bother redefining those extensions over SFTP when they work perfectly well over FTP? The main competitor to GridFTP is not SFTP, but rather proprietary solutions such as IBM Aspera. GridFTP actually supports SSH as a transport, but even then the file transfer protocol is based on FTP not the binary SFTP protocol.
Similar comments apply to TELNET. TELNET-over-TLS is secure, and still preferred in some IBM environments, because there are established protocols for passing 3270 and 5250 block mode terminal data streams over TELNET. Again, no reason in principle why you couldn't define similar protocol extensions for SSH, but why bother when TELNET works perfectly well for this application? And if you really want to use SSH instead of TLS as a transport/security layer, nothing stops you from tunnelling TELNET over SSH.
Thanks for all the knowledge. What is your interest in these protocols, out of curiosity?
Deprecated doesn't mean 'wiped off all computers everywhere'. By that definition, could you name anything that is truly 'deprecated'? An interesting trivia question. I think we have to exclude rare tech like prototypes.
> Thanks for all the knowledge. What is your interest in these protocols, out of curiosity?
Curiosity, yeah, pretty much. One day I decided to read the TELNET and FTP RFCs and became fascinated with all the historical cruft in them. I've also long enjoyed studying IBM mainframe and midrange systems; they are their own somewhat alien world. Most of that study has been limited to reading manuals, although I have mucked around with MVS 3.8J under Hercules (which unfortunately doesn't really have TCP/IP networking, or when it does it's some hacked-on thing with little in common with how TCP/IP actually works on MVS, whether today or historically).
> Deprecated doesn't mean 'wiped off all computers everywhere'. By that definition, could you name anything that is truly 'deprecated'? An interesting trivia question. I think we have to exclude rare tech like prototypes.
There are many systems which we know nobody still uses in production, only for hobbyist/retrocomputing purposes. A famous example would be Multics: at its peak it had over 50 production sites, but the last one was shut down in 2000, and it took over 10 years after that shutdown for an emulator to become available so anyone could run it again.
By contrast, people still use FTP and TELNET every day in production. Neither is inherently insecure, because both can be used over TLS. The majority of open source FTP/TELNET clients/servers never added TLS support, but commercial/proprietary implementations targeted at IBM mainframe sites do.
1. Is that (still) correct?
2. Can't web pages include non-TCP traffic, and if so, is it routed via Tor? For example, doesn't some streaming media use UDP?
3. QUIC doesn't use TCP (deliberately, I think). Won't that affect Tor's long-term viability if everyone eventually moves to QUIC?