Net neutrality and bandwidth

While attempts by telcos to meter the Net are noxious and need to be opposed, technology advances will neutralize them in the long term. Telcos want tiered service, charging more for the top tier, but the proliferation of ever-faster, ever-cheaper bandwidth renders the point moot.

Fiber optic is blindingly fast, cable modem companies keep ramping up speeds, and cell phone providers now offer citywide connections for laptops. In a few years we’ll look back at current conditions as way primitive. Kids will say, “Gee, Grandpa, you mean you had to go somewhere to get a wireless connection?” They, of course, will be used to anytime, anywhere, ultra-high-speed connections.

Thus, it won’t matter if telcos charge more for the top tier. Bandwidth will be ubiquitous and cheap. Nor should there be too much concern about the Net getting locked down; it’s become too much a part of business and commerce for that to happen.

We should absolutely be vigilant and fight any attempts to meter or lock down the Net, but in the long term, they won’t be able to do it anyway. It’ll become like the telephone: always there, always on, with modern life unimaginable without it.

3 Responses to Net neutrality and bandwidth

  1. Network Cabling Company Tue, Sep 26, 2006 at 4:32 pm #

    Great observation! Our company does cabling installations nationwide, and fiber optic cabling is in great demand! Major technology advances happen every 2-3 years. Nobody should be able to “meter” the Net!

  2. You've Got To Be Shitting Me Fri, Jun 29, 2007 at 4:28 am #

    Look forward to higher prices, content filtering and an end to the little guy who wants to start a website. Anyone who ever tried to get content using a cellular provider knows EXACTLY what we can expect in a few years: $3 per 256K upload to your Flickr site.

  3. Cameron Fri, Nov 16, 2007 at 12:54 pm #

    Bandwidth of fiber optics is irrelevant.

    When the network moved from copper wire and microwaves to glass fiber, the bottleneck moved from the media to the routers that stitch them together. These “core routers” are installed in fortresses with redundant power, access airlocks, and waterless fire suppression. It’s the most expensive commercial space there is, and two growth spurts (’96 to ’00 and ’04 to now) have caught the telcos with their pants down: no more switching-center capacity. In my county, Internet bandwidth stopped growing when the power company couldn’t deliver any more power to the existing switching/data centers.

    During the first spurt, hundreds of times more fiber optic capacity went into the ground than anybody expected to need. It was all about construction costs. Rights of way and trenching cost so much more per mile than optical fiber that it costs about the same to lay a hundred fibers in the ground as to lay one. So when you need ten gigabits per second between DC and Baltimore, you install a hundred terabits worth of fiber, just so you won’t have to pull another cable for a while.
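    The arithmetic behind that claim can be sketched in a few lines. All the dollar figures below are illustrative assumptions, not quoted construction costs; the only point is that the trench dominates the total, so extra strands are nearly free.

```python
# Hedged sketch: why overprovisioning fiber is nearly free once the trench
# is dug. Dollar figures are assumed for illustration, not real quotes.

TRENCH_COST_PER_MILE = 100_000   # assumed: rights of way + trenching
FIBER_COST_PER_MILE = 300        # assumed: one strand of optical fiber

def build_cost(miles, strands):
    """Total cost to lay `strands` fibers along a `miles`-long route."""
    return miles * (TRENCH_COST_PER_MILE + strands * FIBER_COST_PER_MILE)

one = build_cost(40, 1)        # one strand over a DC-to-Baltimore-ish run
hundred = build_cost(40, 100)  # a hundred strands over the same trench
print(f"1 strand:    ${one:,}")
print(f"100 strands: ${hundred:,} ({hundred / one:.2f}x the cost)")
```

    With these assumed numbers, a hundredfold increase in capacity costs about 1.3x as much as a single strand, which is the economics Cameron describes.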

    Microprocessors and memory chips get smaller and denser at almost the rate known as “Moore’s law.” That is, processors get twice as fast and memory chips get twice as dense, at roughly the same power consumption, every 18 to 24 months.

    But routers don’t follow Moore’s law. Core routers are the new supercomputers. How fast a router can be made is ultimately limited by electronic packaging technology, not by the limits of photolithography which govern chip making. How much heat can you get out of an electronic box the size of a two-drawer filing cabinet? How many pins can a chip package have before it can’t be manufactured reliably? How many bits per second can you send down a pair of PC-board traces before the length and the connectors they pass through mess up the signal too much? Those things are always getting better, but the doubling interval is more like five to ten years.
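    The gap those two doubling intervals open up compounds quickly. A minimal sketch, taking rough midpoints of the intervals Cameron cites (about 21 months for chips, about 7.5 years for routers — both assumptions, not measurements):

```python
# Hedged sketch: compounding growth under two different doubling intervals.
# The midpoints below are assumed from the 18-24 month and 5-10 year
# ranges in the comment above.

def growth(years, doubling_years):
    """Capacity multiple after `years` given a fixed doubling interval."""
    return 2 ** (years / doubling_years)

decade = 10
chips = growth(decade, 1.75)    # assumed Moore's-law midpoint: ~21 months
routers = growth(decade, 7.5)   # assumed core-router midpoint: ~7.5 years
print(f"Chips over {decade} years:   {chips:.0f}x")
print(f"Routers over {decade} years: {routers:.1f}x")
```

    Under these assumptions, chip capability grows roughly 50x in a decade while router capability grows under 3x, which is why the routers, not the fiber, stay the bottleneck.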

    For the next couple of decades, until routers catch up to fiber, Internet bandwidth will be constrained by how many routers you can stuff into a data center before you have to build a new one, not by how much intercity fiber capacity was installed during the first construction burst. The same problem exists at the “edge” of the network, where cable “head-ends” and DSL Access Multiplexers (DSLAMs) are installed in little concrete coffins (with limited volume and power) in the neighborhoods.

    Discussions of network economics (e.g. whether bandwidth shaping or “metering” is appropriate) should be informed by that. “Too cheap to meter” didn’t make sense for nuclear power or water, and it doesn’t make sense for the Internet.