Visiting Bletchley Park and seeing step-by-step telephone switching equipment repurposed for computing reinforced my appreciation for the brilliance of the telecommunication systems we built over the past 150 years. Packet switching was inevitable and IP-everything makes sense in today's world, but something was lost in that transition too. I am glad to see that enthusiasts with the will and means are working to preserve some of that history. -Posted from SC2025-
I wanted to learn more about computer hardware in college, so I took a class called "Cybernetics" (taught by D. Huffman). I thought we were going to focus on modern stuff, but instead it was a tour of information theory, which included various mathematical routing concepts (kissing spheres/spherical codes, Karnaugh maps). At the time I thought it was boring, but a couple of decades later, when I was working on Clos topologies, it came in handy.
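For anyone who hasn't bumped into it since, the core Clos result is small enough to sketch. The snippet below is my own toy illustration of the textbook non-blocking conditions for a three-stage Clos network, not anything from the class itself:

```python
# Toy check of the classic non-blocking conditions for a three-stage
# Clos network C(m, n, r): r ingress/egress switches with n ports each,
# connected through m middle-stage switches. Illustrative only; the
# parameter names are the usual textbook ones.

def clos_properties(n: int, m: int) -> dict:
    """Which non-blocking guarantees hold for n ports per edge switch
    and m middle-stage switches."""
    return {
        "strictly_non_blocking": m >= 2 * n - 1,   # Clos (1953) bound
        "rearrangeably_non_blocking": m >= n,      # Slepian-Duguid bound
    }

if __name__ == "__main__":
    # Example: 8 ports per edge switch, varying middle-stage count.
    for m in (8, 12, 15):
        print(f"n=8, m={m}: {clos_properties(8, m)}")
```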
Other interesting notes: learning about the invention of telegraphy and the improvements to the underlying electrical systems really helped me understand communications in the 1800s better. And reading/watching The Cuckoo's Egg (with the German relay-based telephones) made me appreciate modern digital, transistor-based systems.
Even today, when I work on electrical projects in my garage, I am absolutely blown away by how much people could do 100+ years ago, with limited understanding and technology, compared to what I'm able to cobble together. I know Newton said he saw farther by standing on the shoulders of giants, but some days I feel like I'm standing on a giant, looking backwards, and thinking "I am not worthy".
When the Bell System broke up, the old guys wrote a 3-volume technical history of the Bell System.[1] So all that is well documented.
The history of automatic telephony in the Bell System is roughly:
- Step-by-step switches. 1920s. Very reliable against outright failure, but about 1% of calls misdirected or failed. Totally distributed: you could remove any switch, and all it would do is reduce the capacity of the system slightly. Too much hardware per line.
- Panel. 1930s. Scaled better, to large-city central offices. Less hardware per line. Beginnings of common control. Too complex mechanically. Lots of driveshafts, motors, and clutches.
- Crossbar. 1940s. #5 crossbar was a big dumb switch fabric controlled by a distributed set of microservices, all built from relays. The most elegant architecture: all reliable wire relays, no more motors and gears. If you have to design high-reliability systems, it is worth knowing how #5 crossbar worked.
- 1ESS - first US electronic switching. 1960s. Two mainframe computers (one spare) controlling a big dumb switch fabric. Worked, but clunky.
- 5ESS - good US electronic switching. Two or more minicomputers controlling a big dumb switch fabric. Very good. (A toy sketch of this duplexed-control pattern follows below.)
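The duplexed-control-over-a-dumb-fabric idea in the 1ESS/5ESS entries above is easy to caricature in code. Here is a toy sketch of the shape only; it assumes nothing about the real ESS internals, and the class and method names are mine:

```python
# Toy sketch of duplexed control over a "big dumb switch fabric":
# one controller is active, the mate shadows it and takes over on a
# fault. Purely illustrative -- not the real ESS control logic.

class SwitchFabric:
    """Dumb crosspoint fabric: it just obeys connect orders."""
    def __init__(self):
        self.connections = {}              # line -> line

    def connect(self, a: str, b: str) -> None:
        self.connections[a] = b
        self.connections[b] = a

class Controller:
    def __init__(self, name: str, fabric: SwitchFabric):
        self.name, self.fabric, self.healthy = name, fabric, True

    def place_call(self, a: str, b: str) -> None:
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        self.fabric.connect(a, b)          # all the "intelligence" lives here

class DuplexedControl:
    """Active/standby controller pair; the fabric itself never fails over."""
    def __init__(self, fabric: SwitchFabric):
        self.active = Controller("CC-0", fabric)
        self.standby = Controller("CC-1", fabric)

    def place_call(self, a: str, b: str) -> None:
        try:
            self.active.place_call(a, b)
        except RuntimeError:
            # Swap to the mate and retry the call.
            self.active, self.standby = self.standby, self.active
            self.active.place_call(a, b)

if __name__ == "__main__":
    fabric = SwitchFabric()
    control = DuplexedControl(fabric)
    control.place_call("555-0101", "555-0199")
    control.active.healthy = False          # simulate a processor fault
    control.place_call("555-0102", "555-0150")
    print(fabric.connections)
```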
The Museum of Communications in Seattle has step by step, panel, and crossbar systems all working and interconnected.
In the entire history of electromechanical switching in the Bell System, no central office was ever fully down for more than 30 minutes for any reason other than a natural disaster, and in one case a fire in the cable plant. That record has not been maintained in the computer era. It is worth understanding why.
[1] https://archive.org/details/bellsystem_HistoryOfEngineeringA...
The more I study the 5E, the more I see it as a multicomputer or distributed system. The minicomputers were responsible for OAM and for orchestrating the symphony over time, but the actual communication happens across the CM, which implements the Time/Space/Time fabric, and a sea of microcontrollers. I think this clarification is worthwhile because it drives your point about faults in the computer era, and by extension the (micro)services era, home even more: it's much less a mainframe and much more a distributed system than commonly chronicled, which can be a harder problem, especially with the tooling back then.
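For readers who haven't met a Time/Space/Time fabric: the "T" stages are time-slot interchangers, which buffer a frame of samples and read it back out in a permuted order. A minimal toy model of that one stage, mine rather than the actual CM design:

```python
# Toy time-slot interchange (TSI), the "T" stage of a Time-Space-Time
# fabric: buffer one frame of samples written in arrival order, then
# read them out according to a connection map. Illustrative only.

def time_slot_interchange(frame, connection_map):
    """frame: list of samples indexed by incoming time slot.
    connection_map: outgoing slot -> incoming slot."""
    return [frame[connection_map[out_slot]] for out_slot in range(len(frame))]

if __name__ == "__main__":
    frame = ["A", "B", "C", "D"]          # samples in incoming slots 0..3
    cmap = {0: 2, 1: 0, 2: 3, 3: 1}       # e.g. outgoing slot 0 carries incoming slot 2
    print(time_slot_interchange(frame, cmap))   # ['C', 'A', 'D', 'B']
```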
> That record has not been maintained in the computer era. It is worth understanding why.
Go on.
This is such a stark contrast with how "critical infrastructure" is built now.
A university bought a 5ESS in the 80s, ran it for ~35 years, did two major retrofits, and it just kept going. One physical system, understandable by humans with schematics, that degrades gracefully and can literally be moved with trucks and patience. The whole thing is engineered around physical constraints: -48V power, cable management, alarm loops, test circuits, rings. You can walk it, trace it, power it.
Modern telco / "UC" is the opposite: logical sprawl over other people's hardware, opaque vendor blobs, licensing servers, soft switches that are really just big Java apps hoping the underlying cloud doesn't get "optimized" out from under them. When the vendor loses interest, the product dies no matter how many 9s it had last quarter.
The irony is that the 5ESS looks overbuilt until you realize its total lifecycle cost was probably lower than three generations of forklifted VoIP, PBX, and UC migrations, plus all the consulting. Bell Labs treated switching as a capital asset with a 30-year horizon. The industry now treats it as a revenue stream with a 3-year sales quota.
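To make that cost-shape argument concrete, here is a deliberately crude back-of-envelope sketch. Every number in it is invented purely for illustration; only the shape of the comparison comes from the comment above:

```python
# Hypothetical lifecycle-cost comparison (arbitrary units, made-up figures):
# one long-lived asset with occasional retrofits vs. replatforming every
# few years. Not real data -- just the shape of the argument.

def total_cost(upfront, annual_opex, migration_cost, migration_period, horizon):
    migrations = horizon // migration_period - 1   # replatform/retrofit events after the initial build
    return upfront + annual_opex * horizon + migration_cost * migrations

horizon = 30
long_lived = total_cost(upfront=5.0, annual_opex=0.2,
                        migration_cost=1.0, migration_period=10,
                        horizon=horizon)           # ~two mid-life retrofits
churny = total_cost(upfront=2.0, annual_opex=0.4,
                    migration_cost=1.5, migration_period=5,
                    horizon=horizon)               # forklift every ~5 years
print(f"long-lived: {long_lived:.1f}  vs  churny: {churny:.1f}  (arbitrary units)")
```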
Preserving something like this isn't just nostalgia, it's preserving an existence proof: telephony at planetary scale was solved with understandable, serviceable systems that could run for decades. That design philosophy has mostly vanished from commercial practice, but it's still incredibly relevant if you care about building anything that's supposed to outlive the current funding cycle.
Author here; surprised to see this posted, but happy to see interest in these historical machines.
I will also plug the Connections Museum, which has an already-impressive installation in Seattle and is working on its own 5ESS recovery for display at a new site in Colorado: https://www.youtube.com/watch?v=3X3-xeuGI5o
> In particular, the machine had an uptime of approximately 35 years including two significant retrofits to newer technology culminating in the current Lucent-dressed 7 R/E configuration...
Pretty impressive. It makes me sad that the trend is to move away from rock-solid stuff towards more and more unreliable and unpredictable stuff (e.g. LLMs that need constant human monitoring because they mess up so much).
Talk about a gargantuan project.. also awesome to bag such a thing. He's lucky to even have the resources to store^W warehouse it
It's not that much space in some parts of the US where properties are measured in acres.
I wonder how many operating 5ESS are left now.
A fairly large number. The bigger question is what happens to all the CO buildings once all the copper is turned down.
There is a huge opportunity, about 5 years from now, for edge datacenters. You have these buildings with highly reliable power and connectivity; all that's needed is servers that can live in a NEBS environment.
COs are already being used for edge datacenters; it's just not been talked about much outside the industry.
Relevant https://www.co-buildings.com/
The CO closest to me was turned into condos. A friend was the general contractor. It was by all accounts a nightmare.
Most of those COs are in buildings that don't have all that much space in them, were built in the '40s and '50s, and likely aren't suitable for that kind of thing. Cooling would be a big deal.
I have been in ~15 COs, and there is tons of floor space in them; the only thing telephone switching equipment has done since the '50s is shrink. Beyond that, most existing CO buildings got expansions when electronic switching came about, because they couldn't add the new electronic switches (1/1A/5 ESS) without additional floor space. Cooling is addressed by the requirement for NEBS-compliant equipment.
The older ones have lots of tall windows. It's the newer windowless ones that cannot be easily repurposed, unless you want to build a data center.
Central offices are everywhere, too. You've driven or walked by any number of them, and the most you noticed was a Bell System logo. The downtown COs in big cities are on expensive real estate.
Across the USA? Very likely a few thousand.
(2024), but still a good read!
This date obsession is moronic, especially when we are talking about technology over forty years old. Next time you are tempted to spam the date, wait and see if the conversation still happens without your vital input.
There are many articles missing a (2025) addition, so get to work!