Growing dependency on a few operators increases the risk of a single point of failure. To avoid this risk and to make Internet services more resilient, the network can be segmented in such a way that issues in one segment have no side effects on other parts of the network, allowing uninterrupted use outside any affected area. Furthermore, service quality degradation can be avoided by providing multiple independent alternative access networks (multihoming, sketched below) and by other means of smarter asset distribution, as presented in this discussion.
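As an illustration of the multihoming idea at the application layer, the following minimal sketch probes a service through several independently routed endpoints and uses the first one that responds. The endpoint addresses are placeholders (documentation prefixes), and a real deployment would combine this kind of failover with network-level multihoming via multiple upstream providers.

import socket

# Hypothetical endpoints for the same service on independent upstream
# networks (placeholder addresses from documentation prefixes).
ENDPOINTS = [
    ("198.51.100.10", 443),  # reachable via upstream A
    ("203.0.113.10", 443),   # reachable via upstream B
]

def first_reachable(endpoints, timeout=2.0):
    """Return the first endpoint that accepts a TCP connection."""
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)
        except OSError:
            continue  # this path is down; try the next independent one
    return None

alive = first_reachable(ENDPOINTS)
print("using endpoint:", alive if alive else "no path available")

Because the endpoints sit on independent access networks, an outage in one segment leaves at least one working path, which is exactly the property segmentation and multihoming aim to provide.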
Many resources on the web and the wider Internet are no longer self-contained, but have hard-coded dependencies on resources delivered by third parties, such as content delivery networks and cloud providers, often for critical features such as navigation. An outage anywhere in this chain can trigger a cascade of unintended outages across many different systems. One example is the web-based whois service of a registry, which must remain usable in emergencies to notify the administrators of a domain, yet turns out to depend on third-party JavaScript sources. Such dependencies can be surfaced with a simple audit, as sketched below.
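The following sketch shows one way such an audit could work, assuming the page is plain HTML reachable over HTTPS: fetch the page and list every script, stylesheet, frame, or image served from an origin other than the page's own. The URL is a hypothetical placeholder, and the sketch uses only the Python standard library.

from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin
from urllib.request import urlopen

PAGE = "https://registry.example/whois"  # hypothetical page to audit

class ResourceCollector(HTMLParser):
    """Collect URLs of scripts, stylesheets, frames and images."""
    ATTRS = {"script": "src", "link": "href", "img": "src", "iframe": "src"}

    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        wanted = self.ATTRS.get(tag)
        if wanted:
            for name, value in attrs:
                if name == wanted and value:
                    self.resources.append(value)

with urlopen(PAGE) as response:
    html = response.read().decode("utf-8", errors="replace")

own_host = urlparse(PAGE).hostname
collector = ResourceCollector()
collector.feed(html)

for ref in collector.resources:
    host = urlparse(urljoin(PAGE, ref)).hostname
    if host and host != own_host:
        print(f"third-party dependency: {ref} (served by {host})")

Every line this prints is an external party whose outage could break the page, so a service meant to work in emergencies should aim for this list to be empty.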
Resilient Internet services can be achieved by ensuring high availability, openness, and disruption tolerance, as detailed in this discussion. This will ultimately improve the operational efficiency of the Internet, lower operational costs, increase privacy by removing central vantage points, improve disaster readiness, and ensure business continuity and social connectivity.