>>1341
I do not agree and here is why:
The internet as originally designed had two major features that made it so important for the era it was built in. 1) It allowed two or more computers to talk to each other over unreliable connections. 2) It allowed data to be stored in multiple locations around the United States/the West where it could be recovered if needed.
Never forget the internet was originally designed in the Cold War era, with the goal of ensuring that no matter what happened there would be at least one location left that could continue to run the Government and jump-start other computers/servers should they be destroyed or knocked offline. It was designed so that these computers could communicate even over the most spotty and slow connections: landlines, radio links, even single sideband or Morse code if needed.
The TCP/IP standard itself has several built-in features designed around there being a near-unlimited number of nodes within the global network, features which are all ignored and go unused on the modern internet due to censorship. I assure you the Government's important stuff isn't hosted the way the modern web (or your personal traffic) is hosted. The important data is mirrored across untold numbers of servers and backed up by servers whose only job is to pipe data between them.
In fact, the internet and the software running on top of it always work better, faster and more reliably if you take advantage of the duplication inherent in the TCP/IP model.
Up until about the late 90s/early 2000s most everything online was designed with this in mind. Standards were followed to ensure different systems could communicate with each other. Data was mirrored in multiple locations whenever possible. Before they turned HTTP into something it should never have been, we saw this with all interactive software. Just compare a federated network of servers like USENET to a forum, imageboard or modern social network. USENET can never be taken down, and even if your local server goes offline there are millions of others you can connect to with the same content. If you push content to one server in the network it's almost instantly mirrored everywhere. This is very different from modern web applications, where you're screwed if the website goes offline. Even if the website is backed by an army of servers, it becomes impossible to access should its reverse proxy/single public IP be knocked offline somehow. News and mail servers aren't like that. There are millions of IPs with the same content to connect to instead.
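To make that fallback concrete, here's a minimal sketch in Python of a client that speaks plain NNTP (RFC 3977) over a raw socket and asks a list of news servers for the same group, settling for whichever one answers. The server names are placeholders, not real hosts:

import socket

# Placeholder hostnames; any public NNTP server carrying the same newsgroups
# would do. If one is down or unreachable the client simply moves to the next.
SERVERS = ["news.example-one.net", "news.example-two.org", "news.example-three.net"]

def group_overview(group="comp.misc"):
    for host in SERVERS:
        try:
            with socket.create_connection((host, 119), timeout=10) as sock:
                f = sock.makefile("rwb")
                f.readline()                                  # greeting, e.g. "200 ready"
                f.write(b"GROUP " + group.encode() + b"\r\n")
                f.flush()
                reply = f.readline().decode().strip()         # "211 <count> <low> <high> <group>"
                f.write(b"QUIT\r\n")
                f.flush()
                return host, reply
        except OSError:
            continue                                          # unreachable: try the next mirror
    raise RuntimeError("none of the listed servers answered")

host, reply = group_overview()
print(f"{host}: {reply}")

The client doesn't care which node it ends up talking to, because in a federated system they all carry the same content.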
In the late 90s/early 2000s we saw the rise of p2p software. It allowed people to share and host files for free, and those files were mirrored globally almost instantly. We'd been using mail for this for a long time, but the new p2p networks let you share large files without breaking them into small pieces. Soon after we saw the backlash from the copyright mafia. They sued everyone and lobbied to have all the p2p networks shut down. So now, instead of streaming video from a global swarm of peers, you're forced to access it through the web using a service like YouTube, where it can be censored and lost forever.
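The core trick those p2p networks relied on is content addressing: a file is identified by a hash of its bytes rather than by the server it happens to sit on, so any peer holding an identical copy can serve it. A minimal Python sketch of the idea (the hashing scheme here is illustrative, not any particular network's):

import hashlib
from pathlib import Path

def content_id(path: str, chunk_size: int = 1 << 20) -> str:
    # Derive a location-independent identifier from the file's bytes.
    # In a p2p network the file is requested by this hash, so any peer
    # with an identical copy can serve it; there is no single canonical
    # URL that can be taken down.
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# print(content_id("some-video.mkv"))  # same bytes -> same ID on every peer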
Centralization was forced upon the internet. The internet was never designed with centralization in mind, and it makes everything worse. Some examples:
Even setting the censorship issue aside, centralization ends up causing far more data to be transported over limited global bandwidth. In a true p2p network, or even a semi-p2p network, high-traffic data is mirrored on local nodes, allowing the user to fetch it from a server nearer to their location. In a centralized model that data must be transported long distances every time a user requests it. We see companies like Google and Netflix leasing so-called edge servers in ISP data centers all over the world to avoid this problem. But it's a "p2p for me and not for thee" type of situation. On a free internet those edge servers would host everyone's content; whatever is viral today would be sitting on them in a cache.
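A back-of-the-envelope comparison makes the bandwidth point obvious. The numbers below are made up for illustration:

# 1,000,000 viewers in one region each watch the same 100 MB video.
viewers = 1_000_000
video_mb = 100

centralized_mb = viewers * video_mb   # every request crosses the long-haul backbone
cached_mb = video_mb                  # one copy fills a cache inside the region

print(f"centralized: {centralized_mb / 1e6:.1f} TB over long-haul links")   # 100.0 TB
print(f"locally cached: {cached_mb / 1e6:.4f} TB over long-haul links")     # 0.0001 TB

The viewers pull the same total amount of data either way; the difference is whether it crosses the backbone a million times or once, with the rest of the traffic staying inside the region.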
The cold hard truth is they centralized the internet for censorship/control, and now they're attempting to justify it by claiming it's a better model. It's not. We've known since the 1920s that it's not a better model for this problem. If it were better we wouldn't have designed POTS and radio networks the way we did. Computers are just the same thing with more bandwidth.
Another thing I should mention is protocols. We're supposed to have multiple protocols, like HTTP, each for a different use. We used to have this before "web 2.0", but now everything is forced to run over HTTP. They do that, again, because it allows them to control content. If your interactive, network-aware applications ran over a simple protocol that didn't live in the web browser, it would cause a lot of problems for them. The last thing they want is multiple standards designed by hobbyists. We've stifled innovation for the last two decades, all in the name of censorship and control of the global network by a small number of people.
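For a sense of how little it takes to run an interactive application over its own purpose-built protocol instead of tunnelling it through HTTP, here's a toy Python example. The "FETCH" command and everything else about this protocol are invented for illustration:

import socket
import threading

# A toy line-oriented protocol: the client sends "FETCH <key>\r\n" and the
# server answers with the stored value or "NOTFOUND". The point is only how
# little machinery a purpose-built protocol needs.
STORE = {"motd": "hello from a non-web protocol"}

def serve_one(srv):
    conn, _ = srv.accept()                       # handle a single request, then exit
    with conn, conn.makefile("rwb") as f:
        command = f.readline().decode().strip()  # e.g. "FETCH motd"
        _, _, key = command.partition(" ")
        f.write((STORE.get(key, "NOTFOUND") + "\r\n").encode())
        f.flush()

def fetch(key, host="127.0.0.1", port=7777):
    with socket.create_connection((host, port)) as sock, sock.makefile("rwb") as f:
        f.write(f"FETCH {key}\r\n".encode())
        f.flush()
        return f.readline().decode().strip()

srv = socket.create_server(("127.0.0.1", 7777))           # bind before the client connects
threading.Thread(target=serve_one, args=(srv,), daemon=True).start()
print(fetch("motd"))                                      # -> hello from a non-web protocol

Both ends of the conversation fit in a couple dozen lines, with no browser and no framework in between.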