The Entropy Cage, by Stormrose, is a gloomy SF story about AI management and its consequences for Future Dystopia.
Really must say again, though: damn that is a late-90s-feeling cover. You can almost feel the oversized paperback in your hands.
In the nearish future, much of society’s scut work is managed by sub-sentient AIs. Sub-sentients operate somewhere around the level of bright animals, if a bit more language-oriented. They act to protect their interests, and can be offered a relatively small set of simple incentives: punishment, reward, freezing. They can therefore, in theory, be trained to good behaviour by simple conditioning (and destroyed if the conditioning fails too badly). Making those decisions is your job, or was; you’re suspended pending review. But something seriously weird is going on, so you’re allowed back, just until this gets sorted out.
‘AI psychologist’ is a venerable idea by SF standards – somewhere around seventy years old. (And of course humans-become-reliant-on-machines, machines-break-down is even older.) In Asimov’s world, being the first robopsychologist makes you a high-powered industrial scientist; in The Entropy Cage, it makes you a precariously-employed consultant. (If there’s a theme about the effect of widespread AI on employment here, it’s probably the result of my overreading.)
The heart of the game is a Papers, Please-like sequence in which sub-sentients self-report their issues and you are expected to deal with them. (There’s a bit of a tension in the premise, here: on the one hand you’re meant to be a uniquely skilled professional whose expertise is so valuable in this crisis that the company needs you and specifically you, but on the other your actual job is that of a rubber-stamp functionary, micromanaged by a shitty boss and unable to make much difference.) The issues are sometimes catastrophic (caused a major accident, many people are dead) but others involve subs defending their jobs (and probably their existence?) against charges of being borderline-inefficient. Slowly, evidence emerges that subs are struggling against one another in a major conflict over resources (and also, vaguely, ideology; this seems to have been a bigger component in the game’s concept than is fully reflected in the text).
I was kind of expecting this to follow a familiar science-fiction trail, in which the sub-sentients turn out to actually be sentient and humans only thought otherwise because of Meat Prejudice. Which is a valid story, but one which has been around for a Very Long Time and has pretty much lost any power to surprise; in its most basic form, it’s a story that relies on denying its own premises, that fails to address the question it has posed.
Instead, rather unexpectedly, the game ends up coming down to a strict-utility vs. inviolable-rights choice: either you can grant the subs rights over their assigned resources (thus preventing them from being destroyed, but making them harder to train efficiently), or you can turn their management over to a supervisor AI (which will manage for maximum efficiency). It doesn’t quite ring true that the protagonist is offered this choice by their generally-shitty supervisor, who presumably knows enough to realise that this is a decision that will have massive consequences.
The strict-utility choice switches, rather abruptly, to an ending in which artistic creativity is stifled because the AI overseer determines that new art doesn’t offer enough value over recycling old. Philosophical arguments against this aside, this feels like a pretty strange tangent, because we’ve seen nothing about the involvement of subs in artistic production thus far, and all the specific examples we’ve been given involve subs working in considerably less complicated tasks, like operating traffic lights and elevators. The stakes could, I think, have been illustrated a bit better here; these aren’t tasks for which you really need anything approaching the kinds of entities depicted here.
The writing is generally in the gets-out-of-the-way camp: because the majority of the text appears as online chat or AI reports, it doesn’t have a huge amount of opportunity either to shine or to fail. Outside this, the prose does an admirable job of establishing mood and presenting the situation in compact terms; its more stylised elements are a little hit-and-miss, though.
You drag your bones to the centre of your room. Thankfully the rest of you follows. As a perpertual prisoner of physics, you realise minor decisions can be character building.
That’s almost good – there’s quite a lot of material in a small space, concepts are clearly-conveyed without feeling hackneyed or overstressed, the metaphor’s extension suggests smart-assery grown weary – but I can’t help feeling it needed one more rewrite to nail it.
It could have used a little more proof-reading for small errors (comma misplacement, typos and misspellings, tense shifts). The styling is light and simple, but effective.
This is a ‘pretty good, as far as it goes’ kind of piece; it contains some decent craft and is pretty effective at the thing it sets out to do, but I couldn’t find a huge amount to get really excited about. For idea-driven SF, it didn’t feel as though it was pushing exciting enough ideas; in particular, the author’s notes talk about how the core concept was about what AI religion might look like, but we’re given so much of an outsider’s view that the religious elements feel like a largely irrelevant detail. Similarly, there’s some evocation of BDSM themes in the language – the sub-sentient AIs are routinely referred to as ‘subs’, and introduce themselves with ‘punish me’ – but apart from making things feel a little creepier, that’s as far as that theme is developed.