The truth is slightly different; much of the information on a web page, even on YouTube, is text-based. You can use this to find and filter data to suit your needs. On top of that, if you want to avoid tracking, it is worth considering offline reading of web pages. Another reason is to collect information for scraping projects. Scraping a web page means extracting the information you need with software, often to feed further analysis or machine learning. And if you need help with a programming problem, many of those sites work well with text-based browsers, so you can stay on the command line.
The designers who built w3m had more than web browsing in mind. The w3m browser is also a pager, so other programs can pipe text and HTML through it. You can use it as a text formatting tool to typeset your HTML pages into plain text. It also ships with an image viewer that handles many image formats; many other terminal programs use it to display images inline.
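A quick sketch of those two roles, the pager and the formatter. The file name and its contents are just examples, and the script stops quietly if w3m is not installed:

```shell
# A tiny HTML file to play with (contents are just an example)
printf '<html><body><h1>Hello</h1><p>w3m can typeset this.</p></body></html>' > demo.html

# w3m may not be installed everywhere; stop quietly if it is missing.
command -v w3m >/dev/null 2>&1 || exit 0

# As a pager, w3m reads HTML from a pipe (interactive, so shown commented out):
#   cat demo.html | w3m -T text/html

# As a formatter, -dump typesets the HTML into plain text on stdout:
w3m -dump -T text/html demo.html
```

The `-dump` form is what other programs lean on when they use w3m as a rendering backend.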
W3m has a cousin in emacs-w3m. This nifty package lets you browse the web inside Emacs; who ever leaves? It uses w3m to render the pages. Being Emacs, the install is the usual routine: put the require statement in your config. It also needs the w3m binary installed.
Apart from browsing in the terminal, you may want to download files and handle them separately. This can make downloads faster, and it often lets you get around geo-blocking. If you are looking for a media file, you can download the page source and search it with grep, tail, and their cousins.
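For example, once a page is saved locally you can hunt for media links with ordinary text tools. The file contents and the pattern here are only illustrations; a real page would come from a downloader like wget:

```shell
# In real use you would first save a page, e.g.:
#   wget -O page.html https://example.com/
# Here we fake one locally so the example runs anywhere:
printf '<a href="clip.mp4">clip</a>\n<a href="about.html">about</a>\n' > page.html

# Pull out anything that looks like a video link
grep -o 'href="[^"]*\.mp4"' page.html

# A long-running download log can be followed live with tail:
#   tail -f wget-log
```

The same grep-and-filter habit works on any text the web hands you, not just HTML.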
The first tool you should know about is wget2, the second version of the venerable wget. It can download a single file, but it can also download many files at once. Its most valuable feature is mirroring a whole website; when you do this, you can set how many levels of links to follow off the site. These downloads can take a long time, so you can run them in the background. If bandwidth is scarce, you can limit how much the download uses. If you have excellent bandwidth, look at puf, which downloads files in parallel.
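The invocations look roughly like this. The URLs are placeholders, and wget2 keeps most of classic wget's flags, so check `wget2 --help` on your system; the live commands are commented out because they would hit the network:

```shell
# wget2 may not be installed everywhere; stop quietly if it is missing.
command -v wget2 >/dev/null 2>&1 || exit 0

# Download a single file (placeholder URL):
#   wget2 https://example.com/file.tar.gz

# Mirror a site, following links at most two levels deep:
#   wget2 --mirror --level=2 https://example.com/

# Run a long mirror in the background with the shell's own tools:
#   nohup wget2 --mirror https://example.com/ >mirror.log 2>&1 &

# Cap the bandwidth the download uses:
#   wget2 --limit-rate=500k https://example.com/big.iso

# Safe to run offline: confirm which wget2 you have.
wget2 --version
```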
You can use cURL to download files, but the big difference from wget is that cURL writes the response, code and all, to standard output. If you run it on an arbitrary site without options, you will see the HTML code. Some sites take advantage of this; open them with cURL and you get a terminal-friendly result. A great example is wttr.in: add your location as a path (curl wttr.in/Stockholm). cURL also has bindings for programming languages like Python, Rust, PHP, and many more.
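You can see that raw-output behaviour without touching the network by fetching a local file over the file:// scheme; the file name and contents here are just an example:

```shell
# Make a small local page, then fetch it with curl:
printf '<html><body>hello from curl</body></html>' > demo.html
curl -s "file://$PWD/demo.html"

# Against a real site you would see the page's HTML the same way:
#   curl https://example.com/
# And a terminal-friendly site answers in plain text:
#   curl wttr.in/Stockholm
```

The `-s` flag just silences the progress meter so only the response remains.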
If you have your own server and need to transfer files, use sftp. It is secure and should be the only way you move your own files between systems. The client is part of ssh, the secure shell system, which you also use to log in to your remote servers.
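A sketch of the typical session; the host name and paths are placeholders, so the remote commands are shown commented out, and the script stops quietly if the OpenSSH client is not installed:

```shell
# The sftp client ships with OpenSSH; stop quietly if it is missing.
command -v sftp >/dev/null 2>&1 || exit 0

# Interactive transfer session (placeholder host):
#   sftp user@example.com

# Non-interactive batch mode reads commands from stdin with -b:
#   printf 'put backup.tar.gz /srv/backups/\n' | sftp -b - user@example.com

# Plain remote login uses ssh itself:
#   ssh user@example.com

# Safe to run offline: show which OpenSSH client you have.
ssh -V 2>&1 | head -n 1
```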
Whatever you do, consider whether a graphical web browser is really the only way to be on the web. Most sites will look odd on the command line, but you can usually get the information you need, and some tools can even show graphics there. They are all far less resource-hungry than the legacy browsers you are used to. The tools for handling files over the web are powerful once you learn their features. You can also use them in your programming projects, especially cURL, which ships as a library with bindings for many programming languages.