New Tech Won’t Save Your Code

Lucas McGregor
7 min read · Mar 22, 2021

History Repeating

Anyone who has survived the grind house of technology for enough years has seen the same thing reimplemented and rebranded time and time again.

The cycle is always the same. The current leading tech’s adoption has mushroomed. It is ported to multiple environments and is used in wider and wider scenarios. As the solution expands to solve more problems for more teams, it becomes more complex. Its success is also its scope creep.

While this is happening, newer technology arrives that is faster and cheaper. Some hard-to-solve problems are suddenly solved cheaply by something else. Standing on these new platforms, newer and more elegant solutions arise.

Make no mistake: whichever ones win will also be widely adopted and eventually mushroom into general-purpose technologies with all the complexity and bloat of the previous winners. In the meantime, people will act like it is a paradigm shift.

Old Habits

But there were containers and virtualization before Docker and Kubernetes. There was Service Oriented Architecture before microservices. There was CORBA, then SOAP, then REST, before gRPC. C compiled on all architectures before Java ran everywhere. We have bounced back and forth between thread pools and event loops for about 50 years now.

As an elder statesman of technology, I am bemused by the historical ignorance of each new wave. They gloriously reimplement old paradigms, uninformed of the lessons from previous iterations, ignorant that they will face the same trade-offs and will walk down the same paths.

To paraphrase Santayana: those who don’t read the specs of the past are doomed to reimplement them.

Constraints Are Constant, Technology Is Temporary

Two things drive all technology paradigms: cost and complexity. When computers were expensive, we could afford a few mainframes and many terminals. When computers became cheap, everyone had a desktop and mainframes made no sense. As networks became cheap, we could do things that were too big for desktops, so we created clusters. Now we are back to many cheap mobile terminals offloading their work to centralized clusters. The patterns for communication, coordination, error handling, etc. are all the same.

Through all of these cycles, three major programming paradigms keep resurfacing to be the silver bullet of the day. Naive developers evangelize their new religion and denounce the previous winners as heretics and behind the times. The novice only has time for one tool and will make it solve all problems.

When Resources Are Limited

All languages are rooted in the basic constructs of data and instructions. The classic von Neumann machine is a general-purpose computer whose memory holds both data and instructions. An instruction either fetches, stores, or operates on the data. At their heart, all programming languages have to map down to this reality.
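To make that mapping concrete, here is a minimal sketch in Python (the example is mine, not the article’s): disassembling a one-line function exposes the fetch/operate/store steps underneath.

    import dis

    def increment(x):
        # One high-level statement...
        return x + 1

    # ...decomposes into von Neumann-style steps. Exact opcode names
    # vary by Python version, but the shape is always the same:
    #   LOAD_FAST  x     <- fetch the data
    #   BINARY_OP  +     <- operate on it
    #   RETURN_VALUE     <- hand the result back
    dis.dis(increment)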

If you started programming at an early age, you probably started with BASIC. It is the classic language for teaching new programmers to think in steps and logic flows. BASIC is a procedural language. The procedural paradigm is where general-purpose computing got started, and for many years languages like COBOL, C, and Fortran were the languages of choice for mapping real-world problems to computers.

Procedural languages map efficiently to assembly instructions and machine code. When hardware was expensive and CPU and memory were limited, efficiency was the biggest lever a developer had to make their code better than the competition’s.

Most people don’t think about procedural languages anymore; they don’t scale well to complex problems. They have been mostly relegated to device drivers and embedded systems that still deal with hardware and efficiency limitations. But the fact is, once you get down to a small enough part of your code, it probably is procedural. Your method or function is probably as complicated as a 1966 Fortran program, and probably just as procedural: a listing of specific steps in a structured flow.
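As a quick illustration (a contrived helper, not from the article), strip away the modern syntax and a typical utility function is still pure procedure:

    def average(readings):
        # A listing of specific steps in a structured flow:
        # structurally, a 1966 Fortran program in miniature.
        total = 0.0
        count = 0
        for value in readings:   # step through the data
            total += value       # accumulate
            count += 1
        if count == 0:           # guard the empty case
            return 0.0
        return total / count     # produce the result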

Cheap Computers but Limited Network

As CPU and memory grew cheaper, expensive problems became cheaper to do on computers. Business requirements and logic swelled. We could afford to use computers in ways we never could before. New interactive UIs brought a whole new set of complexities. Memory jumped into the megabytes, and users expected to manage complex relationships and interactions in their data sets.

In the 90s, on cheap and powerful desktops, it was all about Object Oriented Programming (OOP). This much-maligned paradigm might be as cool as dad jeans, but it is still the workhorse of many systems. Rooted in the early days of computer science, it blew up with C++ and Java.

Like procedural languages, OOP is an imperative paradigm. It focuses on giving the computer instructions on how to do its job. You can draw a straightforward map between OOP or procedural languages and the instructions and data fed to the computer.

Unlike procedural languages, OOP spends a lot of its time creating abstractions between computational units. A large procedural program would be one giant logic flow, resulting in a tangled mess of spaghetti code; OOP acts like Tupperware, keeping the different parts of the program separated and neat.
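Here is a minimal sketch of that Tupperware effect in Python (the class and its invariant are invented for illustration): the object seals its state off from the rest of the program.

    class Account:
        """Keeps its balance sealed away from the rest of the program."""

        def __init__(self, opening_balance=0):
            self._balance = opening_balance  # internal state, private by convention

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance += amount

        def withdraw(self, amount):
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount

        @property
        def balance(self):
            return self._balance

Callers only go through deposit and withdraw, so the invariant (the balance never goes negative) lives in one container instead of being re-checked across one giant logic flow.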

OOP enabled a revolution in software development. Very complicated problems could now be mapped into feasible software. Well-designed systems could scale and last for years. Teams could also expand, and multiple teams could safely work in the same code base. But the OOP paradigm is a leaky metaphor on top of the data and instructions below. If every developer involved didn’t understand how their OOP language actually dealt with that data and those instructions, tricky, hard-to-hunt-down bugs would creep in.

OOP’s biggest trade-off was that it enabled complicated systems to be created, but at the cost of a steep learning curve and an unforgiving environment for rookie mistakes. OOP allowed software and teams to scale in complexity; but you cannot remove complexity from the problem, only shuffle it around. The fast hacking of procedural code gave way to longer architecture and design phases before coding would even start.

Moving to Object Oriented Programming was like going from building a shed in your garden to working with city planners, no matter whether you were building a shed or a skyscraper.

Fast Networks, Distributed State

As networks became cheap, we shifted back to many computers working together. Cloud services still present themselves as mainframes, offering shared resources. But in the days of mainframes, the clients were all dumb terminals with no local copy of state. Cheap computers plus cheap networks give us an Internet of smart terminals, each with its own local state and CPU.

Object Oriented systems tried to keep up with exotic solutions such as smart network caches and distributed transaction managers, but they topple at sufficient scale, where it becomes impractical to keep a true object synched across the network and across multiple devices.

Functional programming, a declarative paradigm, better fills this gap. Functional programming was the basis of Alonzo Church’s research into universal computation and is one of the oldest programming paradigms, dating back to the 1930s.

Before OOP exploded in the 90s, Erlang, a functional language, took over the telecom industry in the 1980s. Unlike the imperative models, which focus on instructions and treat state as the byproduct, declarative models focus on declaring a state and letting the system figure out the steps to achieve it.

Functional programming shifted from commanding virtual objects to sending messages to functions that operate on the message. Unlike arguments passed to objects, functional messages are immutable. As in algebra, when a function works on X, it never changes the value of X; it might make a local copy, an X’. With functional languages, the messages are the state and the functions are stateless machines. When these messages are distributed over the network, functional systems are able to scale easily and tolerate failure. It is a natural fit for the telephone industry, which deals with massively distributed environments, needs to scale up and down quickly with traffic surges, and must stay resilient despite a complex web of commodity components with failure rates that mainframes could never tolerate.
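A minimal sketch of that style in Python (the message type and function are illustrative, not from the article): the function never mutates its input; it returns a new X’.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)   # immutable: fields cannot be reassigned
    class CallRecord:
        caller: str
        duration_secs: int

    def add_airtime(record: CallRecord, extra_secs: int) -> CallRecord:
        # Pure function: the input is untouched; we return a modified copy, an X'.
        return replace(record, duration_secs=record.duration_secs + extra_secs)

    original = CallRecord("alice", 60)
    updated = add_airtime(original, 30)
    assert original.duration_secs == 60   # X is unchanged
    assert updated.duration_secs == 90    # X' carries the new state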

The trade-off for simple distribution is complexity in managing logic. Functional and procedural languages both bloom into trees of nested calls that quickly become unmanageable as their complexity increases. Procedural languages have the added side effect of mutable arguments, which means you have to trace each touch point along the path. Functional languages, especially when using a message bus, mean you only know all the actors at runtime, so the system is not always possible to debug or even predict!
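The mutable-argument trap is easy to show with a contrived sketch: the in-place version forces you to trace every caller that touches the list, while the functional version leaves its input alone.

    def discount_in_place(prices, pct):
        # Mutates the caller's list: a side effect you must trace at every touch point.
        for i in range(len(prices)):
            prices[i] *= (1 - pct)

    def discounted(prices, pct):
        # No side effects: the new state is the return value.
        return [p * (1 - pct) for p in prices]

    cart = [100.0, 50.0]
    sale = discounted(cart, 0.10)     # cart is still [100.0, 50.0]
    discount_in_place(cart, 0.10)     # cart is now silently [90.0, 45.0]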

Trade-Offs

Developers who don’t know the history and motivations behind the various paradigms don’t understand how each system is an answer to different constraints and opportunities. They don’t recognize the trade-offs and patterns that keep repeating across implementations. Nor do they know how to use them.

This is especially important for full stack Internet developers, where the typical stack covers all three areas:

  • procedural code for small units that require high performance and sit close to specific computational and memory resources
  • object oriented systems for managing state and complex business logic
  • functional services for distributed processing, leveraging cheap and fast networks for scalability and fault tolerance

To be an effective full stack developer, you have to be able to understand and switch lenses. You may be working with objects in your mobile app, or in the code within your microservice that manages state and business rules. Then, as you connect your service to another, you switch into a functional mindset and shift from objects to messages.

You have to be able to think about memory and references within the boundaries of a system, but also understand that this monolithic chunk could be distributed later and will then answer to functional-paradigm trade-offs.
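One way to picture that switch (a sketch with invented names): inside the service an object owns mutable state, and at the network boundary it is flattened into an immutable message.

    import json
    from dataclasses import dataclass

    class Order:
        # Inside the boundary: an object managing mutable state and business rules.
        def __init__(self, order_id):
            self.order_id = order_id
            self.items = []

        def add_item(self, sku, qty):
            if qty <= 0:
                raise ValueError("quantity must be positive")
            self.items.append((sku, qty))

    @dataclass(frozen=True)
    class OrderPlaced:
        # At the boundary: an immutable message, not a live object.
        order_id: str
        payload: str   # serialized snapshot of the order's state

    def to_message(order: Order) -> OrderPlaced:
        # Switch lenses: snapshot the object into a message that can cross the network.
        return OrderPlaced(order.order_id, json.dumps(order.items))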

Instead of just learning the latest technology, look at previous solutions for managing these trade-offs. Many of the lessons are obvious and still applicable. By understanding previous solutions that already expanded into complex, general-purpose technologies, you can make informed trade-offs. What parts are relevant to you, and what is just overly complex? More importantly, what unforeseen trade-offs await?

A little technical archeology can go a long way in understanding what makes certain solutions stand the test of time, while others are destined to either implode under their own weight or simply fade away. No idea, no matter how innovative, exists outside of its own history.


Lucas McGregor

SVP of Engineering @ StepStone. Code monkey, product person, policy wonk, armchair philosopher, and all-around tinkerer.