In my experience, how we access the core logic of the software we write has changed a fair bit over the past 25 years.

But the core activity behind this has barely changed.

Early 90s

Back in the 90s, Internet services were pretty much bespoke: every service had its own protocol, but most “text-based” services were pretty similar.

Finger

I remember as a college project, implementing a client for the Finger protocol.

From the introduction to RFC 742:

To use via the network:

ICP to socket 117 (octal, 79. decimal) and establish two 8-bit connections.

Send a single “command line”, ending with <CRLF>.

Receive information which will vary depending on the above line and the particular system. The server closes its connections as soon as this output is finished.

The command line:

Systems may differ in their interpretations of this line. However, the basic scheme is straightforward: if the line is null (i.e. just a <CRLF> is sent) then the server should return a "default" report which lists all people using the system at that moment. If on the other hand a user name is specified (e.g. FOO) then the response should concern only that particular user, whether logged in or not.

Put simply: you connect (unsecured) to port 79 and send a request, which is possibly a username. Finger sends you back a response about that user, or information about all the users on that system, and then the client is disconnected.

This is a really simple protocol, and many other early-Internet era services were built around similar exchanges.
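A client for this takes only a few lines of Python. Here's a rough sketch of the RFC 742 exchange (the host name is made up, and real Finger servers are rare these days): connect, send one line, read until the server disconnects.

    import socket

    def finger(host: str, user: str = "") -> str:
        """Query a Finger server: one request line, read until the server hangs up."""
        with socket.create_connection((host, 79)) as sock:
            # An empty line asks about everyone; a username asks about that user.
            sock.sendall(f"{user}\r\n".encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:  # the server closes the connection when it's finished
                    break
                chunks.append(data)
        return b"".join(chunks).decode("ascii", errors="replace")

    # Hypothetical host, for illustration:
    # print(finger("finger.example.com", "foo"))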

HTTP

HTTP (the Web) was built around a very similar exchange. To quote from the original specification for HTTP:

Connection The client makes a TCP-IP connection to the host using the domain name or IP number, and the port number given in the address. If the port number is not specified, 80 is always assumed for HTTP. The server accepts the connection.

Note: HTTP currently runs over TCP, but could run over any connection-oriented service. The interpretation of the protocol below in the case of a sequenced packet service (such as DECnet(TM) or ISO TP4) is that the request should be one TPDU, but the response may be many.

Request The client sends a document request consisting of a line of ASCII characters terminated by a CR LF (carriage return, line feed) pair. A well-behaved server will not require the carriage return character. This request consists of the word “GET”, a space, the document address, omitting the “http:”, host and port parts when they are the coordinates just used to make the connection. (If a gateway is being used, then a full document address may be given specifying a different naming scheme). The document address will consist of a single word (ie no spaces). If any further words are found on the request line, they MUST either be ignored, or else treated according to the full HTTP spec. The search functionality of the protocol lies in the ability of the addressing syntax to describe a search on a named index. A search should only be requested by a client when the index document itself has been described as an index using the ISINDEX tag.

Response The response to a simple GET request is a message in hypertext mark-up language (HTML). This is a byte stream of ASCII characters. Lines shall be delimited by an optional carriage return followed by a mandatory line feed character. The client should not assume that the carriage return will be present. Lines may be of any length. Well-behaved servers should restrict line length to 80 characters excluding the CR LF pair. The format of the message is HTML - that is, a trimmed SGML document. Note that this format allows for menus and hit lists to be returned as hypertext. It also allows for plain ASCII text to be returned following the PLAINTEXT tag. The message is terminated by the closing of the connection by the server. Well-behaved clients will read the entire document as fast as possible. The client shall not wait for user action (output paging for example) before reading the whole of the document. The server may impose a timeout of the order of 15 seconds on inactivity. Error responses are supplied in human readable text in HTML syntax. There is no way to distinguish an error response from a satisfactory response except for the content of the text.

Disconnection The TCP-IP connection is broken by the server when the whole document has been transferred.

Note the same “connect, send a request, receive a response, disconnect” sequence.

Also, like the earliest protocols, HTTP was essentially a “text” protocol: the client sent a human-readable request, and received a human-readable response.
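In fact, the original HTTP exchange is almost identical to the Finger one. Here's a rough sketch in Python, assuming a server that still answers a bare HTTP/0.9-style request (most modern ones won't):

    import socket

    def http_get(host: str, path: str = "/") -> str:
        """Speak the original HTTP: one GET line, read until the server disconnects."""
        with socket.create_connection((host, 80)) as sock:
            sock.sendall(f"GET {path}\r\n".encode("ascii"))
            response = []
            while True:
                data = sock.recv(4096)
                if not data:  # disconnection marks the end of the document
                    break
                response.append(data)
        return b"".join(response).decode("ascii", errors="replace")

Swap port 79 for port 80 and add the word “GET”, and it is essentially the Finger client again.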

Late 90s - distributed object protocols

In the late 90s, protocols like CORBA and DCOM were mooted. These were request/response oriented but, significantly, they were binary protocols: networks weren’t as fast as they are nowadays, and compact binary protocols could deliver a significant performance improvement.

Both were designed around the idea of exposing objects on the Internet, with requests making method calls on an object.

This was a common idea at the time: Bobo was an early Python library that implemented the same ideas on top of HTTP, and it eventually grew into Zope, which I spent a significant part of the 2000s working with.

And Java had RMI, upon which things like EJB were built.

RMI was the prevailing method of intra-service communication in the Java World right into the mid-2000s.

Early 2000s - distributed objects on the web

The SOAP protocol, and its predecessor XML-RPC, came about in 1999 and 1998 respectively. In 2000 and 2001 I was involved in building a service that exposed a SOAP interface; in those days it was fairly lightweight, and we exposed a facade that made our service available to a Windows desktop client.

Both of these were built on top of HTTP, in the classic Request-Response-Disconnect style that dates back to the early 1970s, and involve sending blobs of XML back and forth.
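Python still ships an XML-RPC client in its standard library, which shows the shape of those exchanges nicely. A minimal sketch, with a made-up endpoint and method name:

    from xmlrpc.client import ServerProxy

    # Hypothetical endpoint and method, for illustration only.
    proxy = ServerProxy("http://www.example.com/RPC2")

    # Each call is a single HTTP POST carrying a blob of XML;
    # the reply is another blob of XML, parsed back into Python values.
    result = proxy.users.lookup("foo")
    print(result)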

Over time, SOAP and JAX-WS came to dominate the way Enterprise Services were exposed on the internet. The tooling around these made it easy to expose business objects on the internet and consume them in clients, and many developers will be familiar with WSDL, and with objects being exposed inappropriately on the internet.

On the other side of Enterprise Services were architectures like REST and more bespoke URL schemes, but these are still simple Request-Response-Disconnect protocols.

Over this time XML was by and large replaced by JSON, and we saw the huge rise of “AJAX” as revolutionary services like Google’s GMail and Maps were released. These services required large numbers of HTTP requests, so “Request-Response-Disconnect” became more and more expensive, and persistent streaming connections became popular; things like Websockets provide a way for clients and servers to utilise a single connection to make multiple requests.
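With a library like the third-party websockets package for Python, the persistent style looks something like this sketch (the endpoint is hypothetical):

    import asyncio
    import websockets  # third-party: pip install websockets

    async def main():
        # One persistent connection, many request/response exchanges.
        async with websockets.connect("wss://echo.example.com") as ws:
            for name in ("foo", "bar", "baz"):
                await ws.send(name)      # a "request"
                reply = await ws.recv()  # a "response", on the same socket
                print(reply)

    asyncio.run(main())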

All this time, services were moving from Desktop-Server applications to purely Web-based services; things like DCOM and SOAP are now fading memories.

Modern Era

In recent years, streaming protocols have become more common. As the C10K problem has been solved, servers are now designed to handle far more connections than ever before.

Modern languages and libraries are designed to make multiplexing many connections to services easier and safer, and protocols like HTTP/2 and gRPC are designed around persistent connections, with streams of data being transmitted down the same socket connection, much like Websockets.

Request-Response-Disconnect has changed to reflect this, and has become more like “Request-Response-Response, Request-Response…Disconnect”.
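Even plain HTTP/1.1 keep-alive has this shape: Python's http.client will reuse one TCP connection for several exchanges, provided the server allows it and each response is fully read. A sketch against a hypothetical host and paths:

    import http.client

    # One TCP connection, several request/response cycles, then disconnect.
    conn = http.client.HTTPSConnection("www.example.com")
    for path in ("/users/foo", "/users/bar"):
        conn.request("GET", path)
        response = conn.getresponse()
        print(path, response.status, len(response.read()))
    conn.close()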

The Rise of HTTP

Nowadays, HTTP dominates internet protocols, mostly because it’s fairly cheap to build on top of.

I don’t doubt that if Finger were reimplemented nowadays, you’d be fetching “https://finger.example.com/users/userid” and getting a response for a single user.

Or “https://finger.example.com/users” to get all users on the system.
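That hypothetical modern Finger would need nothing more than the standard library:

    from urllib.request import urlopen

    # The made-up endpoints from above.
    with urlopen("https://finger.example.com/users/userid") as response:
        print(response.read().decode("utf-8"))  # one user

    with urlopen("https://finger.example.com/users") as response:
        print(response.read().decode("utf-8"))  # everyone on the system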

Summary

But, in all this change, a few things have remained constant.

  • Request-Response

    Most communication is still Request-Response. Websockets shifted some aspects of client communication to a more asynchronous model, but consumers hit websites and expect responses.

  • Databases

    Fundamentally, most requests are for information from databases. Of course, there’s often a bit of processing of that data before you return it to the client, but most services are really “Request-Lookup in Database-Response” (see the sketch below).

Databases have changed enormously in the past 20 years. Relational databases still dominate the market, but more specialised options like Graph databases and Document databases, under the NoSQL banner, have become more prevalent.

The project I worked on in 2000 stuffed parsed documents into an Oracle database; nowadays that’d almost certainly be a document-oriented store.
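That “Request-Lookup in Database-Response” core fits in a few lines. A sketch using SQLite and a hypothetical users table:

    import sqlite3

    def lookup_user(db: sqlite3.Connection, user_id: str) -> dict:
        """Most services, reduced: take a request, look it up, shape a response."""
        row = db.execute(
            "SELECT id, name, last_login FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        if row is None:
            return {"error": "no such user"}
        # A little processing of the data before returning it to the client.
        return {"id": row[0], "name": row[1], "last_login": row[2]}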

Logic hasn’t changed much: we still calculate things, and the fundamental algorithms for that haven’t really altered. But how we query for the data, how we receive requests, and how we respond to those requests have all changed dramatically in this period. We’re more likely than ever to be receiving and responding on multiple channels; while “Request-Response” is not dead, the “how” has changed dramatically, and continues to do so.

Good software architecture should make this easy, so that if HTTP/3 comes out next year and you want to expose your brand new Unicorn service to early adopters, it’s as simple as hooking it up to the same logic that services your HTTP/2, Websocket, gRPC and legacy HTTP requests.
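One way to read that: keep the core logic as plain functions that know nothing about transports, and make each protocol a thin adapter. A sketch of the shape, with hypothetical names:

    import json

    # Core logic: knows nothing about HTTP, Websockets, or gRPC.
    def get_user(user_id: str) -> dict:
        return {"id": user_id, "name": "Foo"}  # really a database lookup

    # Thin adapters, one per protocol, each translating to and from get_user().
    def http_handler(path: str) -> str:
        return json.dumps(get_user(path.rsplit("/", 1)[-1]))

    def websocket_handler(message: str) -> str:
        return json.dumps(get_user(message.strip()))

    # A new protocol next year means one more small adapter, not new logic.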