What comes next in the following sequence: 650, 400, 300, …? More on this in a minute.

I decided to make a little table with the help of the Oxford English Dictionary to summarise the usage of most of the words Eric Hobsbawm listed at the beginning of his Age of Revolution, 1789-1848. I have highlighted all of the meanings not in use until at least 1800 below:

Industry: Since 1500 it has had a meaning of productive work, trade, or manufacture. In later use esp.: manufacturing and production carried out on a commercial basis, typically organized on a large scale and requiring the investment of capital. Since 1801: Manufacturing or production, and those involved in it, regarded as an entity, esp. owners or managers of companies, factories, etc., regarded as influential figures, esp. with regard to investment in an economy.

Industrialist: Since 1839, to denote a person engaged in or connected with industry.

Factory: Since 1618 A location or premises in which a product is manufactured; esp. a building or range of buildings with plant for the manufacture or assembly of goods or for the processing of substances or materials.

Middle Class: Since 1654 A class of society or social grouping between an upper and a lower (or working) class, usually regarded as including professional and business people and their families; (in singular and plural) the members of such a class. However only since 1836: Of, relating to, or designating the middle class. And only since 1846: Characteristic of the middle class; having the characteristics of the middle classes. Esp. in middle-class morality. Frequently derogatory.

Working Class: Since 1757 A class of society or social grouping consisting of people who are employed for wages, esp. in unskilled or semi-skilled manual or industrial work, and their families, and which is typically considered the lowest class in terms of economic level and social status; (with the, in singular and plural) the members of such a class. However only since 1833: Of, belonging to, or characteristic of the working class.

Capitalist: Since 1774 A person who possesses capital assets, esp. one who invests these, esp. for profit in financial and business enterprises. Also: an advocate of capitalism or of an economic system based on capitalism.

Capitalism: Since 1833 The practices or principles of capitalists; the dominance of capitalists in financial and business enterprises; esp. an economic system based on wage labour in which the means of production is controlled by private or corporate interests for the purpose of profit, with prices determined largely by competition in a free market.

Socialism: Since 1833 Frequently with capital initial. A theory or system of social organization based on state or collective ownership and regulation of the means of production, distribution, and exchange for the common benefit of all members of society; advocacy or practice of such a system, esp. as a political movement. Now also: any of various systems of liberal social democracy which retain a commitment to social justice and social reform, or feature some degree of state intervention in the running of the economy.

Marxism: Since 1883 The ideas, theories, and methods of Karl Marx; esp. the political and economic theories propounded by Marx together with Friedrich Engels, later developed by their followers to form the basis for the theory and practice of communism.
Aristocracy: Since 1561 it has had the meaning, in the literal sense of the Greek, of the government of a state by its best citizens. Since 1651: The class to which such a ruling body belongs, a patrician order; the collective body of those who form a privileged class with regard to the government of their country; the nobles. The term is popularly extended to include all those who by birth or fortune occupy a position distinctly above the rest of the community, and is also used figuratively of those who are superior in other respects.
Railway: Since 1681 A roadway laid with rails (originally of wood, later also of iron or steel) along which the wheels of wagons or trucks may run, in order to facilitate the transport of heavy loads, originally and chiefly from a colliery; a wagonway. Since 1822 (despite the first railway not being opened until 1825): A line or track typically consisting of a pair of iron or steel rails, along which carriages, wagons, or trucks conveying passengers or goods are moved by a locomotive engine or other powered unit. Also: a network or organization of such lines; a company which owns, manages, or operates such a line or network; this form of transportation.
Nationality: Since 1763 National origin or identity; (Law) the status of being a citizen or subject of a particular state; the legal relationship between a citizen and his or her state, usually involving obligations of support and protection; a particular national identity. Also: the legal relationship between a ship, aircraft, company, etc., and the state in which it is registered. Since 1832: A group of persons belonging to a particular nation; a nation; an ethnic or racial group.
Scientist: Since 1834 A person who conducts scientific research or investigation; an expert in or student of science, esp. one or more of the natural or physical sciences.

Engineer: Since 1500 Originally: a person who designs or builds engines or other machinery. Subsequently more generally: a person who uses specialized knowledge or skills to design, build, and maintain complicated equipment, systems, processes, etc.; an expert in or student of engineering. Frequently with distinguishing word. From the later 18th cent. onwards mainly with reference to mechanical, chemical, electrical, and similar processes; later (chiefly with distinguishing word) also with reference to biological or technological systems. Since 1606: A person whose profession is the designing and constructing of works of public utility, such as bridges, roads, canals, railways, harbours, drainage works, etc.

Proletariat: Since 1847 Wage earners collectively, esp. those who have no capital and who depend for subsistence on their daily labour; the working classes. Esp. with reference to Marxist theory, in which the proletariat are seen as engaged in permanent class struggle with the bourgeoisie, or with those who own the means of production.

Crisis: Since 1588 Originally: a state of affairs in which a decisive change for better or worse is imminent; a turning point. Now usually: a situation or period characterized by intense difficulty, insecurity, or danger, either in the public sphere or in one’s personal life; a sudden emergency situation. Also as a mass noun, esp. in in crisis.

Utilitarian: Since 1802 Of philosophy, principles, etc.: Consisting in or based upon utility; spec. that regards the greatest good or happiness of the greatest number as the chief consideration or rule of morality. Since 1830: Of or pertaining to utility; relating to mere material interests. Since 1847: In quasi-depreciative use: Having regard to mere utility rather than beauty, amenity, etc.

Statistics: Since 1839 The systematic collection and arrangement of numerical facts or data of any kind; (also) the branch of science or mathematics concerned with the analysis and interpretation of numerical data and appropriate ways of gathering such data.

Sociology: Since 1842 The study of the development, structure, and functioning of human society. Since 1865: The sociological aspects of a subject or discipline; a particular sociological system.

Journalism: Since 1833 The occupation or profession of a journalist; journalistic writing; newspapers and periodicals collectively.

Ideology: By 1796 (a) The study of ideas; that branch of philosophy or psychology which deals with the origin and nature of ideas. (b) spec. The system introduced by the French philosopher Étienne Condillac (1715–80), according to which all ideas are derived from sensations. By 1896: A systematic scheme of ideas, usually relating to politics, economics, or society and forming the basis of action or policy; a set of beliefs governing conduct. Also: the forming or holding of such a scheme of ideas.

Strike: Since 1810 A concerted cessation of work on the part of a body of workers, for the purpose of obtaining some concession from the employer or employers. Formerly sometimes more explicitly strike of work. Cf. strike v. IV.24, IV.24b Phrase, on strike, also (U.S.) on a strike. Frequently with preceding qualifying word, as general strike, outlaw strike, selective strike, sit-down strike, stay-away strike, stay-down strike, stay-in strike, sympathetic strike, wildcat strike: see under the first elements. Also figurative. Since 1889: A concerted abstention from a particular economic, physical, or social activity on the part of persons who are attempting to obtain a concession from an authority or to register a protest; esp. in hunger strike, rent strike.

Pauperism: Since 1792 The condition of being a pauper; extreme poverty; = pauperdom n. Since 1807: The existence of a pauper class; poverty, with dependence on public relief or charity, as an established fact or phenomenon in a society. Now chiefly historical.
Source: https://www.oed.com/dictionary/ (subscription needed for full access)

Now try and imagine having a conversation about politics, economics, your job, the news, or even what you watched last night on TV without using any of these words. Try and imagine any of our politicians getting through an interview of any length without resorting to industry, ideology, statistics, nationality or crisis. Let’s call us now Lemmy (Late Modern) and us then Emily (Early Modern):

Lemmy: We need to send back people who arrive here illegally if they are a different nationality.

Emily: What’s a nationality?

Lemmy: Failing to do so is based on woke ideology.

Emily: What’s an ideology? And what has my state of wakefulness got to do with it?

Lemmy: This is a crisis.

Emily: Is that a good crisis or a bad crisis?

Lemmy: All crises are bad.

You get the idea.

The 1700s are divided from us by a political and economic language which would have been almost unrecognisable to the people who lived then.

However the other thing that occurs to me is that 1800 is quite a while ago now. The approximate date boundaries of the various iterations of English are often presented as follows:

Source: https://www.myenglishlanguage.com/history-of-english/
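For anyone puzzling over the opening sequence, here is a minimal sketch of the arithmetic behind it, assuming one common set of period boundaries (roughly AD 450, 1100, 1500 and 1800; the exact dates vary by source):

```python
# A rough sketch of the durations of the main periods of English,
# assuming boundaries of roughly AD 450, 1100, 1500 and 1800 (sources differ).
periods = {
    "Old English": (450, 1100),
    "Middle English": (1100, 1500),
    "Early Modern English": (1500, 1800),
    "Late Modern English": (1800, 2025),  # still running
}
for name, (start, end) in periods.items():
    print(f"{name}: {end - start} years")
# Old English: 650, Middle English: 400, Early Modern English: 300, Late Modern English: 225 (so far)
```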

Back to my sequence. We are up to 225 years now since the last major shift. So why do we still base our political and economic discussions on the language of the early 1800s?

Well perhaps only our politicians and the people who volunteer to be in the Question Time audience do. As Carlo Iacono puts it brilliantly here in response to James Marriott’s essay The dawn of the post-literate society:

The future Marriott fears, where we’re all reduced to emotional, reactive creatures of the feed, is certainly one possibility. But it’s not inevitable. The teenagers I see who code while listening to philosophy podcasts, who annotate videos with critical commentary, who create elaborate multimedia presentations synthesising dozens of sources: they’re not the degraded shadows of their literate ancestors. They’re developing new forms of intellectual engagement that we’re only beginning to understand.

In the spirit of the slow singularity, perhaps the transition is already happening, but will only be recorded on a timeline when it is more established. Take podcasts, for instance. Ofcom’s latest Media Nations report from 2024 says this:

After a dip in the past couple of years, it seems that 15-24-year-olds are getting back into podcasts, while 35-44s are turning away. Podcasts are still most popular among adults aged 25-34, with weekly reach increasing to 27.9% in the last year. The over-54s remain less likely than average to listen to podcasts, but in contrast to the fluctuation in younger age groups, reach has been steadily increasing among over-54s in the past five years.

It may be that what will be the important language of the next century is already developing out of sight of most politicians and political commentators.

And the people developing it are likely to have just as hard a time holding a conversation with our current rulers as Lemmy is with Emily.

Source: https://pluspng.com/img-png/mixed-economy-png–901.png

Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.

This same line is used to describe our political options. The Political Compass adds an authoritarian/libertarian axis, as in their 2024 election manifesto analysis, but the line from left to right (described as the economic scale) is still there:

Source: https://www.politicalcompass.org/uk2024

So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.

This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnasts’ beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.

So what does it mean for this arrangement if we suddenly introduce another axis of information processing, ie the large language AI models? I am imagining something like this:

What will this mean for how countries see their economic organisation? What will it mean for our politics?

In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.

The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And what now looks like a prophetic analysis of the limitations of the gymnasts’ beam economic and political model of the slow singularity has still not caught on at all.

However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when it has come back as an idea. It wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis of the time, and then, in 2012, an adaptation of Sphereland: A Fantasy About Curved Spaces and an Expanding Universe (Bolland: een roman van gekromde ruimten en uitdijend heelal), Dionys Burger’s 1957 Dutch sequel to Flatland, which was itself not translated into English until 1965, the year the first animated film came out.

So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So 9 years on from their paper, how are things looking? Well the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are being increasingly used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument between Vernor Vinge’s “Singularity”, where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allows a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity”, which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” So Simon says of markets, for instance, that they:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution), and the removal of alternatives, which required us to spend increasing proportions of our time within, and interacting with, bureaucracies and other large organisations, was the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, ie language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new form of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

“It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism”. I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer aiming to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th century and early 19th to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789-1848 to look into this a little further.

The introduction to Hobsbawm’s book, from 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they frame most of our economic and political discussions still. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution” or 4IR or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights'”, whereas Hobbes would have seen it as just another form of utility. This then led to the natural order of property ownership being woven into the reassuring (for property owners) political economy of Adam Smith and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited, requiring long years of education through which you had to support yourself, and opportunities to do so were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers with the relatively tiny actuarial profession today, which has around 19,000 members in the UK.

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than it realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid 1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions, alongside much else of our modern world, were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish, which is why we cannot easily imagine the breakdown of late capitalism. However if we focus on AI etc as the drivers of the next industrial revolution, we will probably be missing where the action really is.

I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.

Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”

This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.

Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.

Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define and impossible-to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.

Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.

The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:

  1. Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
  2. What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
  3. Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?

Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.

This got me thinking about a few things in a different way. One of these was AI.

Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15-hour week. And all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.
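As a rough back-of-the-envelope check on that claim (my sketch, not Keynes’ own arithmetic): in Economic Possibilities for our Grandchildren he assumed living standards in progressive countries would be four to eight times higher within a hundred years, and compound growth of around 2% a year gets you comfortably into that range:

```python
# Back-of-envelope check of Keynes' 1930 assumption (illustrative figures only).
# He expected living standards to be 4-8 times higher within roughly 100 years.
growth_rate = 0.02          # assumed annual productivity growth
years = 100
multiple = (1 + growth_rate) ** years
print(f"Productivity multiple after {years} years at {growth_rate:.0%}/yr: {multiple:.1f}x")  # ~7.2x

# If all of that gain were taken as leisure rather than extra output,
# a 48-hour 1930 working week could in principle shrink to:
hours_1930 = 48
print(f"Implied working week: {hours_1930 / multiple:.1f} hours")  # ~6.6 hours
```

Even taking only part of the gain as leisure, Keynes’ 15-hour week sits well within what the arithmetic allows.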

There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, so the hours lost were from those who couldn’t choose, as The Resolution Foundation have pointed out. Unfortunately neither their pay nor their quality of work has increased sufficiently for those hours to meet their needs.

So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.

When Uncharles and The Wonk finally get in to see God, God gives an example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. Replacing all human contact with eternally indefatigable and keen-to-serve robots was the world that resulted.

Look at us clever humans, constantly dreaming of ways to increase our efficiency, remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are to what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it; that is just another means. We are currently working extremely hard and using a huge proportion of news and political affairs airtime and focus on turning the English Channel into the seaborne equivalent of the seat where vagrants and/or migrants cannot rest.

So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.

“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”

I watched The War Game this week, as it had suddenly turned up on iPlayer and I had not seen it before. It was the infamous film from 1966 on the horrors of a nuclear war in the UK that was not televised until 1985. It has been much lauded as both necessarily horrifying and important over the years, but what struck me watching it was how much it looked back to the period of rationing (which had only ended in the UK 12 years earlier) and general war-time organisation from the Second World War. It would be a very different film if made now, probably drawing on our recent experiences of the pandemic (when of course we did dig huge pits for mass burials of the dead and set up vast Nightingale hospitals as potential field hospitals, before the vaccines emerged earlier than expected).

But what about the threat of nuclear war which still preoccupied us so much in the 1980s but which seems to have become much less of a focus more recently? With the New START treaty, which limits the number of strategic nuclear warheads that the United States and Russia can deploy, and the deployment of land and submarine-based missiles and bombers to deliver them, due to expire on 5 February 2026, negotiations between Russia and the United States finally appear to be in progress. However China has today confirmed that it does not want to participate in these.

In Mark Lynas’ recent book Six Minutes to Winter, he points to the Barrett, Baum and Hostetler paper from 2013 which estimated the probability of inadvertent nuclear war in any year to be around 1%. This is twice the probability of insolvency we think acceptable for our insurance companies under Solvency II and would mean, if accurate, that the probability of avoiding nuclear war by 2100 was 0.99 raised to the power of 75 (the number of years until 2100), or 47%, ie less than a fifty-fifty chance.
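A minimal sketch of that arithmetic, taking the paper’s roughly 1% a year figure at face value and 75 years to 2100:

```python
# Probability of avoiding inadvertent nuclear war by 2100, assuming an
# independent ~1% chance per year (the Barrett, Baum and Hostetler estimate).
p_war_per_year = 0.01
years_to_2100 = 75

p_avoid = (1 - p_war_per_year) ** years_to_2100
print(f"P(no inadvertent nuclear war by 2100): {p_avoid:.0%}")  # ~47%

# For comparison, Solvency II targets a 1-in-200 (0.5%) annual failure
# probability for insurers, half the assumed annual risk above.
```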

That doesn’t seem like good enough odds to me. As Lynas says:

We cannot continue to run the daily risk of nuclear war, because sooner or later one will happen. We expend enormous quantities of effort on climate change, a threat that can endanger human civilisation in decades, but ignore one that can already destroy the world in minutes. Either by accident or by intent, the day of Armageddon will surely dawn. It’s either us or them: our civilisation or the nukes. We cannot both survive indefinitely.

The Treaty on the Prohibition of Nuclear Weapons (TPNW) was adopted at the UN in 2017 and came into force in 2021. In Article 1 of the Treaty, each state party to it undertakes never to develop, test, produce, possess, transfer, use or threaten to use nuclear weapons under any circumstances. 94 countries have signed the TPNW to date, with 73 full parties to it.

The House of Commons library entry on TPNW poses a challenge:

It is the first multilateral, legally binding, instrument for nuclear disarmament to have been negotiated in 20 years. However, the nuclear weapon states have not signed and ratified the new treaty, and as such, are not legally bound by its provisions. The lack of engagement by the nuclear weapon states subsequently raises the question of what this treaty can realistically achieve.

It then goes on to state the position of the UK Government:

The British Government did not participate in the UN talks and will not sign and ratify the new treaty. It believes that the best way to achieve the goal of global nuclear disarmament is through gradual multilateral disarmament, negotiated using a step-by-step approach and within existing international frameworks, specifically the Nuclear Non-Proliferation Treaty. The Government has also made clear that it will not accept any argument that this treaty constitutes a development of customary international law binding on the UK or other non-parties.

There are 9 nuclear states in the world: China, France, India, North Korea, Pakistan, Russia, Israel, the UK and the United States. Israel recently conducted a 12-day war with Iran to stop it becoming the 10th. Many argue that Russia would never have invaded Ukraine had Ukraine kept its nuclear weapons (although it seems unlikely that Ukraine would ever have been able to use them as a deterrent, for a number of reasons). So the claims of these nuclear states that their arsenals are essential to their security are real.

But is the risk that continued maintenance of a nuclear arsenal poses worth it for this additional security? For the security only operates at the deterrence level. Once the first bomb lands we are no more secure than anyone else.

Which makes it all the more concerning when Donald Trump starts saying things like this (in response to a veiled threat by the Russian Foreign Minister about their nuclear arsenal):

“I have ordered two Nuclear Submarines to be positioned in the appropriate regions, just in case these foolish and inflammatory statements are more than just that. Words are very important, and can often lead to unintended consequences, I hope this will not be one of those instances.”

But with a probability of avoiding “unintended consequences” less than fifty-fifty by 2100? That really doesn’t feel like good enough odds to me.

The 1960s version of The Magnificent Seven (itself a remake of Kurosawa’s Seven Samurai) before most of them were shot dead

In my last post, I suggested that there appeared to be a campaign to impugn the character of the younger generation as cover for reducing graduate recruitment, partly because of the desire to make AI systems of various sorts handle a wider and wider range of tasks. However there are other reasons why the value of AI needs to be promoted to the point where, if your toaster or fridge is not using a chip, it absolutely should be. It is all about the dependence of the US stock market on the so-called Magnificent 7 companies: Alphabet (Google), Apple, Meta (Facebook), Tesla, Amazon, Microsoft and Nvidia, whose combined market capitalisation as at 22 July was 31% of the S&P 500.

Nvidia? Who are they? They produce silicon chips. As Laura Bratton wrote in May:

As of Nvidia’s 2025 fiscal fourth quarter (the three months ending on Jan. 26 of this year), Bloomberg estimates that Microsoft spends roughly 47% of its capital expenditures directly on Nvidia’s chips and accounts for nearly 19% of Nvidia’s revenue on an annualized basis.

Meanwhile, 25% of Meta’s capital expenditures go to Nvidia and the company accounts for just over 9% of Nvidia’s annual revenue.

Amazon, Alphabet and Tesla are also big customers.

Nvidia is a growth stock, which means that it needs continued growth to support its share price. Once it ceases to be a growth stock then the kind of price earnings ratio it currently enjoys (nudging up to 60; by comparison the price earnings ratio of, say, HSBC is around 17.5) will no longer be acceptable to investors and a large correction in the share price will happen. So a growth slowdown in the Magnificent 7 is big news.
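A toy illustration of why that re-rating matters, using the two price earnings multiples above as purely indicative numbers:

```python
# Illustrative only: what happens to a share price if the market re-rates a
# stock from a "growth" multiple to a more ordinary one, with earnings unchanged.
earnings_per_share = 1.00      # normalised
growth_pe = 60                 # roughly Nvidia's current multiple, per the text
ordinary_pe = 17.5             # roughly HSBC's multiple, per the text

price_before = growth_pe * earnings_per_share
price_after = ordinary_pe * earnings_per_share
print(f"Implied fall in share price: {1 - price_after / price_before:.0%}")  # ~71%
```

The earnings do not have to fall at all; the multiple compressing on its own is enough to do the damage.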

What would prevent a growth slowdown? Well, primarily a lot of processing-heavy sales for Facebook, Amazon, Apple and Google. That is why there is now an AI overview of your Google search, why Rufus sits at the bottom of your Amazon search, and why everything appears to have a voice-activated capability which can be accessed via Alexa or Siri these days.

Of course I am not arguing that there are not uses for large language models (LLMs) and other technologies currently wrapped up in the term AI. Seth Godin, usually a first mover in this space, has produced a set of cards with prompts for your LLM that you can tailor for various uses. Many people are seeing how AI applications can cut down the time they spend on everything from diary management to constructing PowerPoint presentations. There is no doubt that use of AI will have changed the way we do some things in a few years’ time. It will not, however, have replaced all of the jobs in Microsoft’s list, from mathematician to geographer to historian to writer. If you want a (much) fuller critique of what is misguided about the AI bubble, I refer you to The Hater’s Guide To The AI Bubble.

There is a lot of rough surrounding a few diamonds and the conditions for a bubble are all there. We know this because we have been here before. On 10 March 2000, the dotcom bubble burst. As Goldman Sachs puts it:

The Nasdaq index rose 86% in 1999 alone, and peaked on March 10, 2000, at 5,048 units. The mega-merger of AOL with TimeWarner seemed to validate investors’ expectations about the “new economy”. Then the bubble imploded. As the value of tech stocks plummeted, cash-strapped internet startups became worthless in months and collapsed. The market for new IPOs froze. On October 4, 2002, the Nasdaq index fell to 1,139.90 units, a fall of 77% from its peak.

Fortune are now claiming that the current AI boom is bigger than the dotcom bubble. And even leading figures in the AI industry admit that it is already a bubble.

This is where it gets interesting. The FT, in its reflection on these parallels, appears to be comforted by the big names involved this time:

To be sure, the parallels are not exact. They never are. While most of the dotcom companies were ephemeral newcomers, the Mag 7 include some of the world’s most profitable and impressive groups including Apple, Amazon and Microsoft, as well as the main supplier to the AI economy, Nvidia.

But of course this is the reason why it’s worse this time. We were able to manage without the “ephemeral newcomers”, although Amazon‘s share price fell by 90% over 2 years and Microsoft lost 60%, so the comparison is not quite true. However these companies were not the foundations of the economy then that they are now.

If Nvidia is the essential supply chain for all the other 6 of the Magnificent 7, then its own supply chain is equally precarious. As Ed Conway’s excellent Material World points out, Nvidia is “fabless” (ie without its own fabrication plant) and relies on Taiwan Semiconductor Manufacturing Company (TSMC) for the manufacture of its processors. They in turn are completely dependent on the company which makes the machines essential to their manufacturing units, ASML. As Conway says:

As of this moment, ASML is the only company in the world capable of making these machines, and TSMC is, alongside Samsung, the only company capable of putting such technology into mass production.

And then there are the raw materials required in these industries. Much has been made, by Diane Coyle and others, of the “weightless” nature of our global economy. Conway demolishes this fairly comprehensively:

In 2019, the latest year of data at the time of writing, we mined, dug and blasted more materials from the earth’s surface than the sum total of everything we extracted from the dawn of humanity all the way through to 1950.

There is a place in North Carolina called Spruce Pine where they mine the purest quartz in the world. As one person Conway interviewed said:

“If you flew over the two mines in Spruce Pine with a crop duster loaded with a very particular powder, you could end the world’s production of semiconductors and solar panels within six months.”

Whereas China controls the solar panel market, it is reliant on imports for its semiconductors. In 2017 these imports cost China more than Saudi Arabia exported in oil or the entire global trade in aircraft.

Conway muses on whether China would invade Taiwan because of this and concludes probably not.

“Even if China invaded Taiwan and even if TSMC’s fabs survived the assault…that would not resolve its issue. Fab 18 [TSMC’s plant] might be where the world’s most advanced chips are made, but they are mostly designed elsewhere”.

However it would certainly be hugely disruptive if that were your goal. So even if the share prices of the Magnificent 7 don’t plummet of their own accord, they might be eviscerated by a crop duster or an assault on Taiwan.

There are so many needles poised to prick this particular bubble that it would seem prudent for a company to be cautious about how dependent it makes itself on AI technology over the next few years.

Last time I suggested that the changes to graduate recruitment patterns, due at least in part to technological change, appeared to be to the disadvantage of current graduates, both in terms of number of vacancies and in what they were being asked to do.

This immediately reminds me of the old Woody Allen joke from the opening monologue to Annie Hall:

Two elderly women are at a Catskills mountain resort, and one of ’em says: “Boy, the food at this place is really terrible.” The other one says, “Yeah, I know, and such … small portions.”

This would clearly be an uncomfortable position for Corporate Britain if it were accepted. So a pushback is to be expected. The drop in graduate vacancies is hard to challenge, so the next candidate is obviously the candidates themselves.

So hot on the heels of “Kids today need more discipline”, “Nobody wants to work”, “Students today aren’t prepared for college”, “Kids today are lazy”, “We are raising a generation of wimps” and “Kids today have too much freedom” (I refer you to Paul Fairie’s excellent collections of newspaper reports through history detailing these findings at regular intervals), we now have the FT, newspaper of choice for Corporate Britain, weighing in on “The Troubling Decline in Conscientiousness”, this time backed up by a whole series of graphs:

John Burn-Murdoch does a lot of great data work on a huge array of subjects which I have referred to often, but I find the quoted studies problematic for a number of reasons. First of all, there is the suspicion that young people have already been found guilty before the evidence to back this up was looked for. For instance, which came first here: the “factors at work” or the “shifts”?

While a full explanation of these shifts requires thorough investigation, and there will be many factors at work, smartphones and streaming services seem likely culprits.

At one point John feels compelled to say:

While the terminology of personality can feel vague, the science is solid.

At which point he links to this study, defending the five-factor model of personality as a “biologically based human universal”, which terrifies me a little. Now of course there are always studies pointing in lots of different directions for any piece of social science research and this is no exception. In this critique of the five-factor model (FFM), for instance, we find that:

While the two largest factors (Anxiety/Neuroticism and Extraversion) appear to have been universally accepted (e.g., in the pioneering factor-analytic work of R. B. Cattell, H. J. Eysenck, J. P. Guilford, and A. L. Comrey), the present critique suggests, nevertheless, that the FFM provides a less than optimal account of human personality structure.

I first saw the FT article via a post on LinkedIn, where there was one mild pushback sitting alone amongst crowds of pile-ons from people of my generation. After all it feels right, doesn’t it? But Chris Wagstaff, Senior Visiting Fellow at Bayes Business School, was spot on, I feel, when he pointed out four potential behavioural biases at play here within the organisations where these young people are working:

  1. The decline in conscientiousness and some of the other traits identified could be a consequence of more senior colleagues not inviting or taking on board constructive challenge from younger colleagues, the calamity of conformity, i.e. groupthink, so demotivating the latter.
  2. Related to this is the tendency for many organisations to get their employees to live and breathe an often meaningless set of values and adhere to a blinkered way of doing things. Again, hugely frustrating and demotivating.
  3. Or perhaps we’re seeing way too many meetings being populated by way too many participants, meaning social loafing (ie when individual performance isn’t visible they simply hide behind others) is on the increase.
  4. Finally, remuneration structures might discourage entrepreneurial thinking and an element of risk taking (younger folk are less risk averse than older folk). Again, very demotivating.

These sound much more convincing “factors at work” to me than smartphones or streaming services, neither of which, of course, is the preserve of the young. But demonising the young is an essential prelude to feeling better about denying them work or forcing them into some kind of reverse centaur position.

Corporate Britain needs to do better than pseudo-scientific victim blaming. There are real issues here around the next generation’s relationship with work and much else which need to be met head on. Your future pension income may depend upon it.

In a previous post, I mentioned the “diamond model” that accountancy firms are reportedly starting to talk about. The impact so far looks pretty devastating for graduates seeking work:

And then by industry:

Meanwhile, Microsoft have recently produced a report into the occupational implications of generative AI and their list of the top 40 vulnerable roles looks like this (look at where data scientist, mathematician and management analyst sit – all noticeably more replaceable by AI than “model”, which caused all the headlines when Vogue did it last week):

So this looks like a process well underway rather than a theoretical one for the future. But I want to imagine a few years ahead. Imagine that this process has continued to gut what we now regard as entry level jobs and that the warning of Dario Amodei, CEO of AI company Anthropic, that half of “administrative, managerial and tech jobs for people under 30” could be gone in 5 years, has come to pass. What then?

Well this is where it gets interesting (for some excellent speculative fiction about this, the short story Human Resources and novel Service Model by Adrian Tchaikovsky will certainly give you something to think about), because there will still be jobs in these roles, just many fewer of them. They will be very competitive. Perhaps we will see FBI-style recruitment processes becoming more common for the rarefied few, probably administered by the increasingly capable systems I discuss below. They will be paid a lot more. However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output where errors are hard to spot and therefore to stop (Doctorow calls them “reverse centaurs”, ie humans have become the horse part) includes being the ready-made scapegoat (or “moral crumple zone” or “accountability sink”) for when they are inevitably used to overreach what they are programmed for and produce something terrible. The AI system is no longer working for you as some “second brain”. You are working for it, but no company is going to blame the very expensive AI system that they have invested in when there is a convenient and easily replaceable (remember how hard these jobs will be to get) human candidate to take the fall. And it will be assumed that people will still do these jobs, reasoning that it is the only route to highly paid and more secure jobs later, or that they will be able to retire at 40, as the aspiring Masters of the Universe (the phrase coined by Tom Wolfe in The Bonfire of the Vanities) in the City of London have been telling themselves since the 1980s, only this time surrounded by robot valets no doubt.

But a model where all the gains go to people from one, older, generation at the expense of another, younger, generation depends on there being reasonable future prospects for that younger generation or some other means of coercing them.

In their book, The Future of the Professions, Daniel and Richard Susskind talk about the grand bargain. It is a form of contract, but, as they admit:

The grand bargain has never formally been reduced to writing and signed, its terms have never been unambiguously and exhaustively articulated, and no one has actually consented expressly to the full set of rights and obligations that it seems to lay down.

Atul Gawande memorably expressed the grand bargain for the medical profession (in Better) as follows:

The public has granted us extraordinary and exclusive dispensation to administer drugs to people, even to the point of unconsciousness, to cut them open, to do what would otherwise be considered assault, because we do so on their behalf – to save their lives and provide them comfort.

The Susskinds questioned (in 2015) whether this grand bargain could survive a future of “increasingly capable systems” and suggested a future when all 7 of the following models were in use:

  1. The traditional model, ie the grand bargain as it works now. Human professionals providing their services face-to-face on a time-cost basis.
  2. The networked experts model. Specialists work together via online networks. BetterDoctor would be an example of this.
  3. The para-professional model. The para-professional has had less training than the traditional professional but is equipped by their training and support systems to deliver work independently within agreed limits. The medical profession’s battle with this model has recently given rise to the Leng Review.
  4. The knowledge engineering model. A system is made available to users, including a database of specialist knowledge and the modelling of specialist expertise based on experience in a form that makes it accessible to users. Think tax return preparation software or medical self-diagnosis online tools.
  5. The communities of experience model, eg Wikipedia.
  6. The embedded knowledge model. Practical expertise built into systems or physical objects, eg intelligent buildings which have sensors and systems that test and regulate the internal environment of a building.
  7. The machine-generated model. Here practical expertise is originated by machines rather than by people. This book was written in 2015 so the authors did not know about large language models then, but these would be an obvious example.

What all of these alternative models had in common, of course, was the potential to do away with the need for the traditional-model professional in future.

There is another contract which has never been written down: that between the young and the old in society. Companies are jumping the gun on how the grand bargain is likely to be re-framed and adopting systems before all of the evidence is in. As Doctorow said in March (ostensibly about Musk’s DOGE when it was in full firing mode):

AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job

What strikes me is that the boss in question is generally at least 55. As one consultancy has noted:

Notably, the youngest Baby Boomers turned 60 in 2024—the average age of senior leadership in the UK, particularly for non-executive directors. Executive board directors tend to be slightly younger, averaging around 55.

Assume there was some kind of written contract between young and old that gave the older generation the responsibility to be custodian of all of the benefits of living in a civilised society while they were in positions of power so that life was at least as good for the younger generation when they succeeded them.

Every time a Baby Boomer argues that the state pension age should increase because “we” cannot afford it, he or she is arguing both that the worker who will then be paying for his or her pension should continue to do so and that the same worker should accept a delay in receiving their quid pro quo, with no risk that the changes will be applied to the Boomer, as all changes are flagged many years in advance. Such a contract would clearly be in breach. Every Boomer graduate from more than 35 years ago who argues for the cost of student loans to increase when they never paid for theirs would break such a contract. Every Boomer homeowner who argues against any measure which might moderate the house price inflation from which they benefit in increased equity would break such a contract. And of course any such contract worth its name would require strenuous efforts to limit climate change.

And when a Boomer removes a graduate job to temporarily support their share price (so-called rightsizing) in favour of a necessarily not-yet-fully-tested system (by which I mean not just testing the software, but also all of the complicated network of relationships required to make any business operate successfully), the impact of that temporary inflation of the share price on executive bonuses is being valued much more highly than both the future of the business and the future of the generation that will be needed to run it.

This is not embracing the future so much as selling a futures contract before setting fire to the actual future. And that is not a contract so much as an abusive relationship between the generations.

In my last post, I expressed a preference for the single transferable vote. So let’s look at the competition (a more detailed look at each from the Electoral Reform Society can be found here):

Party List Proportional Representation

Variants of this are the most common types of voting system in the world, being used in 80 countries. In the closed list variant, people just vote for parties and the parties then supply candidates in proportion to those votes. In an open list system, voters choose from a list of candidates: the vote both counts towards the party’s total and determines the order in which that party’s candidates are elected. A semi-open system means parties publish the order in which their candidates will be supplied, but voters just choose parties. Constituency size also affects how these systems work.
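How do the parties “supply candidates in proportion”? There are several allocation methods; purely as an illustration, here is a minimal sketch of one widely used approach, the d’Hondt method, with made-up party names and vote totals:

```python
def dhondt(votes, seats):
    # Award list seats one at a time. Each party's next "quotient" is its vote
    # total divided by (seats already won + 1); the seat goes to the party
    # with the highest quotient.
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Made-up party names and vote totals for a five-seat constituency
print(dhondt({"Purple": 43_000, "Orange": 33_000, "Teal": 24_000}, seats=5))
# {'Purple': 2, 'Orange': 2, 'Teal': 1}
```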

The closed list system was used in the UK for European Parliament elections until we left the EU. These elections had consistently low turnouts in the UK and only about 5% of people were able to identify their MEP. So I think that probably disallows these systems for the UK.

Additional Member System

This is first past the post, but with additional MPs added to make each party’s overall numbers proportional to the popular vote. The proportional share is arrived at with a second vote, cast for a party on a party list basis, with all the disadvantages of that system.

Imagine how many more MPs would have been required to make the last election proportional! For the 412 Labour MPs to only represent 34% of the seats we would need 1,212 in total, an increase of 562 (ie almost double). This, combined with the disadvantages of the party list system, disallows it for me I think.
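As a quick back-of-the-envelope check of that arithmetic (a sketch assuming 650 existing seats and that no constituency results are taken away, only top-up seats added):

```python
# Back-of-the-envelope check: 412 Labour MPs on 34% of the vote,
# with proportionality achieved purely by adding top-up seats.
existing_seats = 650
labour_seats = 412
labour_vote_share = 0.34

# For 412 seats to be only 34% of the Commons, the Commons must grow to:
required_total = round(labour_seats / labour_vote_share)  # 1212
top_up_seats = required_total - existing_seats            # 562

print(required_total, top_up_seats)  # 1212 562
```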

Supplementary Vote

You get two votes instead of one; the first choice works as in FPTP. If no one gets 50% of the vote, there is a run-off between the top two, where second choices are then added on to those candidates’ totals (although if your first choice is in the run-off, your second choice is not counted). It was used to elect the London Mayor, a role which obviously doesn’t require proportionality. Which is good, because it does not remotely provide it.
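A toy sketch of that count, with made-up ballots, just to make the mechanics concrete:

```python
from collections import Counter

def supplementary_vote(ballots):
    # Each ballot is a (first_choice, second_choice) pair.
    firsts = Counter(first for first, _ in ballots)
    leader, votes = firsts.most_common(1)[0]
    if votes * 2 > len(ballots):
        return leader                      # outright majority on first choices
    top_two = {c for c, _ in firsts.most_common(2)}
    runoff = Counter()
    for first, second in ballots:
        if first in top_two:
            runoff[first] += 1             # your second choice is not counted
        elif second in top_two:
            runoff[second] += 1            # transferred into the run-off
    return runoff.most_common(1)[0][0]

# Made-up ballots: A and B reach the run-off and C's second choices decide it
ballots = [("A", "C"), ("A", "B"), ("B", "C"), ("B", "A"), ("C", "B"), ("C", "B")]
print(supplementary_vote(ballots))  # B
```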

Alternative Vote

If no one gets 50% of the vote, the candidate who came last is removed and their votes are reallocated according to the second choices of the people who had that candidate as their favourite. And so on until someone does get 50%. However, it is not a form of proportional representation, as the ERS re-running of the 2015 election under a number of different systems shows:

Also, we have already voted against introducing this system (in 2011).
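For completeness, here is a toy sketch of that elimination process, again with made-up ballots (and ties broken arbitrarily, which real counting rules handle more carefully):

```python
from collections import Counter

def alternative_vote(ballots):
    # Each ballot is a list of candidates in preference order.
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked candidate still in the race
        tallies = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)
        )
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total:
            return leader
        # No majority yet: eliminate the least popular candidate and repeat
        remaining.discard(min(tallies, key=tallies.get))

# Made-up ballots: B is eliminated first and that ballot transfers to C
ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(alternative_vote(ballots))  # C
```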

Alternative Vote Plus

This was a system invented by the Independent Commission on the Voting System (often referred to as the Jenkins Commission as it was chaired by Roy Jenkins) in 1998, which has never been implemented anywhere. It recommended using the Alternative Vote system for 80-85% of the seats in Parliament, then topping up from party lists to make the system proportional. Unfortunately, as ERS have pointed out, 15% of the seats would not be enough to achieve this.

Two-Round System

This is similar in spirit to the alternative vote: if no one gets 50% of the vote in the first round, the top two candidates go through to a second round, in which voters whose first choices did not make the top two can transfer their support to one of the remaining candidates. It is therefore not a proportional system. It also introduces a gap between the first and second votes, with uncertain consequences.

Borda Count

In this system there is one ballot paper with a list of candidates. You put a number next to each candidate, with your favourite at number one. These are converted into points, with the candidate ranked last scoring one point, the next-to-last two points, and so on. The candidate with the most points is the winner.

It is a recipe for tactical voting and is used in Eurovision – need I say more?
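A minimal sketch of that scoring, with made-up candidates and ballots:

```python
from collections import defaultdict

def borda_count(ballots):
    # Each ballot ranks every candidate, favourite first. With n candidates,
    # a first preference scores n points and last place scores 1 point.
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] += n - position
    return dict(scores)

# Made-up three-candidate example
ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"]]
print(borda_count(ballots))  # {'A': 7, 'B': 6, 'C': 5}
```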

So how do these compare with the single transferable vote?

Single Transferable Vote

First a link to the video from my last post, explaining how it works, as a reminder (I highly recommend it and it is under 7 minutes long).

In this system, you have multiple seats per (larger) constituency, each constituency being the size of 4-5 current ones. As a voter you number the candidates (you must vote for at least one and after that it’s up to you). There is a quota (known as the Droop quota after its inventor Henry Droop), which is calculated as:

(total votes / (total seats + 1)) + 1

This wacky formula adjusts the normal requirement in a single-MP election, namely to get more than 50% of the vote, to one that works where there are multiple seats available, and the final “+ 1” is there to replicate the “more than” requirement.
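With made-up numbers, the quota works out like this (note that the standard statement of the formula rounds the division down before adding one):

```python
def droop_quota(total_votes, seats):
    # Round the division down, then add one.
    return total_votes // (seats + 1) + 1

# 100,000 votes for 4 seats gives a quota of 20,001. Five candidates could not
# all reach it (5 x 20,001 = 100,005 > 100,000), so at most four can be elected
# - the "more than 50%" idea generalised to multiple seats.
print(droop_quota(100_000, 4))  # 20001
```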

If a candidate gets at least this number of votes, they are elected and their surplus votes (ie the ones in excess of the quota) are then reallocated to those voters’ second choice candidates. If no one reaches the quota, then the least popular candidate is removed and their votes are reallocated, until someone does reach it.

The constituency should then end up with MPs approximately in proportion to the percentage vote of each of their parties (although independents can operate successfully within this system too).

This is a proportional system which still gives you a link to your MPs. The larger constituencies can line up with existing areas which make sense to voters, eg in Birmingham there are 10 constituencies currently within the Birmingham City Council region, which could be combined into two larger constituencies each represented by 5 MPs in proportion to the votes in each area.

I must get a couple of requests a week from campaigners to write to my MP in support of their latest campaign. My own experience of writing to my MP, who has a well-organised and efficient office but has been in the role for a long time and feels he knows his own mind about most things by now, is that the most I can expect is a return letter telling me all of the reasons why I am wrong about my position on whatever it is. Imagine a constituency where most of you had the choice of an MP who shared at least some of your concerns and was therefore more likely to help represent your views more widely. Imagine how much more empowered you would feel, how much more likely to get involved in politics, how much more likely to vote.

Imagine that effect rippling throughout the constituencies up and down the country. Imagine what it might do to voter turnout!

Source: ERS. Here the countries that use proportional voting systems are in purple and the countries that use non-proportional voting systems are in dark blue.

Is proportional representation (PR) likely to lead to more representation for smaller parties and therefore coalitions? Yes it is. But the mistake is in thinking that FPTP doesn’t lead to coalitions. The difference is that they are currently within a few big dominant parties trying to hold their different wings together on the left and the right. With PR those deals need to be done in public so that we can judge them and adjust our votes accordingly.

The Jenkins Commission mentioned earlier ended up rejecting STV on the basis that it moved to bigger constituencies (which does not seem a disadvantage in itself), had a more complicated voting system (which can be fully explained in a video under 7 minutes long) and had “a tendency towards parochial politics”. It seems to me that time has moved on. The challenges we are increasingly facing are going to need local community responses. What Lord Jenkins might have called “parochial” from his rather lofty view of politics may be just what we need now.

Instead imagine that your vote counted at the next election even if you weren’t in the majority. Imagine most people having a sympathetic MP they could write to about things that mattered to them. Imagine MPs encouraged to represent the views they stood for election on to the full extent of their ability – no more having to sit in one or two buckets that aren’t really what they’re about because they are the only buckets that ever get elected. Imagine that all political parties win the proportion of seats they have earned as a result of their proportion of the vote, no more and no less. Imagine being able to vote for the party you prefer rather than needing to tactically vote to keep out your worst nightmare. All this could be yours.

All we need to do is demand it!