Man is Dead

Alastair Meeks
9 min read · Dec 5, 2022


God is dead, Nietzsche told us in 1882. The ideas of the Enlightenment, he thought, left God no space to breathe. He correctly argued that it was hopeless for the religious to take refuge in the things unexplained by science. First, those things would inevitably dwindle in number over time. Second, a God of the gaps left increasing numbers of things outside God’s remit. The idea was doomed to fail.

This week, we learned, if we had not already, that man is dead. OpenAI released access to ChatGPT, a chatbot powered by a large language model AI. You can ask it questions in informal English on a wide range of subjects. It will produce answers in a second or two.

This has been an overnight sensation. It can write code and find errors in code. Ballerinas might want to rethink the idea of their next career being in cyber. Lecturers are fretting about how undergraduates might use such resources for plagiarism. A better question would be why we might want humans to study subjects at degree level that computers can competently précis for us.
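To make the “find errors in code” claim concrete, here is a toy example of my own devising, not one taken from a ChatGPT session: the first function orders its divisibility checks wrongly, so multiples of 15 return “Fizz” rather than “FizzBuzz”. Pasting a snippet like this into ChatGPT and asking what is wrong with it reliably produces a correction along the lines of the second function.

```python
def fizzbuzz_buggy(n):
    # Bug: the n % 3 check runs first, so the n % 15 branch is
    # unreachable and multiples of 15 wrongly return "Fizz".
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    if n % 15 == 0:
        return "FizzBuzz"
    return str(n)


def fizzbuzz_fixed(n):
    # A fix of the kind ChatGPT suggests: test the most specific
    # condition (divisibility by 15) before the broader ones.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


print(fizzbuzz_buggy(15))  # Fizz (wrong)
print(fizzbuzz_fixed(15))  # FizzBuzz
```

It is a trivial bug, but that is the point: spotting it once required a human reading the code, and now it does not.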

Its abilities are very varied. It could tell me about the decline of the Fluxus art movement. It could explain the disappearance of the river Walbrook (this answer is not quite right: the Walbrook disappeared in the Middle Ages, when the monks of Charterhouse diverted waters from its source to their monastery, and it was culverted by the fifteenth century).

I asked it to produce a poem on Nietzsche and the idea that God is dead. The poem at the top is its own creation. Now this is not great art, but it is of at least the calibre that you find in a greetings card. I expect Moonpig and Funky Pigeon will be paying close attention. Here’s what I got when I asked for a tailored greetings card poem.

Sure, it’s nowhere near perfect yet. You don’t have to try hard to find the limits of ChatGPT’s knowledge. Still, this is far beyond technical dominance of a very specific problem like chess, Go or Diplomacy. This is a program that can answer real-world problems on a wide variety of subjects in useful ways that previously only humans could. And it is only going to get more powerful and more capable at doing this.

The waters are washing ever higher. We got used to computers being a place where we store information. Now we are seeing computers explain and interrogate the information that they store. The next stage is inevitably that they will apply the explanations that they provide to real world problems without the immediate need for human oversight.

Let’s put this in context of a specific office career, one that’s not particularly computer-oriented. By a happy chance, I’m very familiar with just such a career: that of lawyer. Belying their image as antediluvians, lawyers were using Lexis (part of the ancestry of the internet) to find cases long before the internet was a thing: I was able to use it at university as early as 1985, though it seemed exorbitantly expensive to a poor student. When I started work in 1990, I did have a computer on my desk and the lawyers at that firm had a system of internal email. In practice, the computer was used primarily for word processing at that time.

At my next firm, I didn’t get a computer on my desk until August 1997 (after I had made partner). It was geared up for external email but my first proper email from a client didn’t come for a further 9 months. Of course, the avalanche started to gather pace soon after.

We quickly, however, saw the advantages of storing information on computer. Up till then, every lawyer had their own bank of knowhow which stretched to numerous lever-arch files on shelves in their rooms. These were organised with varying levels of efficiency and completeness. They also took up a lot of space, which was not a trivial matter when one considers office rental prices in the prime city centre locations that lawyers occupy.

Long before specialist information providers had created a market for such information, lawyers had uploaded government guidance, precedent letters, letters of advice and so on into centralised databases. At one drinks function for lawyers in about 2003, two assistants from a rival firm came over to me to effuse about a letter I’d co-written the previous year on a highly contentious and highly public matter where their firm had been on the other side. “It’s very good”, they said. “We use it all the time.”

Individual lawyers only slowly abandoned their own knowhow systems, being distrustful of centralised authority, but everyone agreed it was a huge improvement, getting the best of everyone’s knowhow in one place in a much more easily searchable format. Far from reducing lawyers’ workloads, it streamlined and simplified an important part of them so that lawyers’ advice measurably improved. Since putting together that advice took less time, in most areas of law, work on projects that had previously not been cost-effective to carry out could now be undertaken.

This was not, however, true of all areas of law. Employment law as a practice area now has far fewer lawyers than it did 20 years ago. It turned out that it was cheaper to manage most tribunal cases administratively than to keep lawyers involved.

In due course that information function was outsourced, especially in highly technical areas. It was cheaper for lawyers to pay specialists to do the information gathering and to professionalise the taxonomies than to do it in-house. The value that lawyers add nowadays is not in access to the information but in understanding what to do with that information (or, quite often, the lack of information): tactics, strategy and judgement informed by industry understanding. When I retired in 2020, I couldn’t have told you where the office library was located. All the knowhow I needed was available at my fingertips or in the heads of colleagues in my team. I charged a large part of my fees advising on two questions: “what is everyone else doing?”; and “what would you do if you were me?”

Separately, computing power massively simplified project management for lawyers. This is an area which does not come naturally to many lawyers, but legal projects need management just as much as any other projects.

AI is going to be able to take over much of the project management function. (If you work in project management, start thinking about how you are going to add to the process rather than establish it. My suggestion is majoring on projects with high-status participants: such participants are impervious to threats, so neat timelines and dependencies will only be met through human skills of persuasion rather than by relying on computer-generated deadlines being met.) And it is easy to see how ChatGPT, suitably primed with specialist legal material, could already draw up high-level strategies for generic circumstances. As it got more powerful, it could draw up ever more tailored strategies. This seems pretty close now. The job of lawyers would slowly reduce in many areas to that of data inputter and peer reviewer. As AIs got more reliable, some clients might well seek to dispense with even that, provided the indemnity insurance in place for those offering the AI services was satisfactory.

There are areas of law that would be opened up by this technology. To give an example, social security law is currently above the tree line for lawyers. It is monstrously complicated, but it has few practitioners, who are poorly paid, because individual cases just aren’t worth enough to repay the study that they would require. Imagine, however, a world where individuals could pay modest sums to get accurate synopses from AIs of the law relating to their circumstances and have AIs prepare template claim forms, case management tools and skeleton arguments so that they could effectively challenge poor administrative decisions taken by overworked bureaucrats out of their depth with targets to meet. It would be transformative for justice. It is also something very plausible from this point.

In most areas of law, however, the trend will be to require fewer skilled lawyers to produce work of greater quality and consistency than at present. There is no obvious reason why the lawyers displaced will continue to be required for other purposes. So the fate of employment lawyers in the last generation may well foreshadow the fate of many other lawyers in the coming decades.

This challenge looms not just for lawyers but for accountants, actuaries, management consultants, project managers and pretty well any other office job requiring the application of knowledge to circumstances. If you want to know the likely scope, look at any area where Software as a Service is provided. Patio11’s law applies. The business world as we know it is about to be dynamited.

Computers as a place to store information opened up powerful new roles for humans, curating and applying that newly-accessible information. Computers explaining that information and interrogating it will take those skilled curator roles away from humans, requiring no more of humans than supermarket shelf-stackers. Perhaps in the short to medium term there will be more jobs for humans applying the explanations the AIs have provided. However, as AIs get more and more powerful, those roles too will disappear. We are becoming Man of the gaps. And that idea is doomed to fail.

So what is going to be left? Early versions of the Daleks were defeated by stairs. In the short term, AI similarly lacks mobility in the physical world. Those looking for employment should look for jobs that require physical activity of some sort.

We are on the cusp of a great inversion. For generations, we have in general paid jobs requiring mental activity more highly than jobs requiring physical activity. Muscle is readily available, so why pay a lot for it? Nurses, PTs, farm labourers and soldiers are currently not well-paid relative to most desk jockeys.

In future, inter-personal skills, brawn and footwork will become scarcer and thus more valuable than brainwork. We will become draught animals.

If you are sat in an office and you’re not ready to stop working in the medium term, what can you do to adapt? AIs are soon going to be able to outperform humans with any existing information. One way to get an advantage will be to generate new information that the AI does not have access to.

The answer is to make your job dependent on inter-personal skills, brawn or footwork. If you’re a lawyer, you might make sure that your job involves directly meeting people (perhaps interviewing witnesses) in a way that AI can’t. Disputes of fact are going to be less amenable to AI intervention than disputes of interpretation.

If you’re a journalist and you produce listicles, ChatGPT can do that already, so you ought to be panicking right now. If you’re a journalist and you analyse official statistics or official records, AIs are going to be able to do that better than you soon enough. AI is going to struggle for a long time to conduct interviews and assess truthfulness, picking up on cadences and hesitations in speech. It’s going to struggle even more to persuade those initially reluctant to talk to it to change their minds.

This, however, is a strategy of a Man of the gaps, as you have probably already realised. The Daleks can climb stairs now and at some point AIs will harness robotic power to be able to make progress into these areas of current human advantage.

And then what? There may be some human activities that AI will not venture into — sports, for example, though even there I expect the effort will be made. Man has sought to conquer and occupy every square inch of land on earth. AI’s aims will be just as all-embracing.

Are there things that humans can do that AI will never be able to do? I don’t know. We need, however, to start planning on the basis that many current human occupations are going to be obsolete within fairly short time periods and that many more will become obsolete on a rolling basis thereafter. We are going to have a hell of a lot of unemployed and underemployed people. What are we going to do to keep them fed, clothed and occupied? And what is going to be the point of us as a species?
