Booknotes: The Last Human
This is a weird book. The best analogy I can come up with (like always when I say this, I mean “the best contrived joke I could come up with”) is that it’s like Homestuck if Homestuck were written by a less po-faced version of the guy who wrote The Three Body Problem, instead of the guy who ended up becoming an eboy or whatever that was. It has the cosmic ambition, the scale and the getting-emotionally-attached-to-some-weird-aliens, but rather than spreading all that over thousands and thousands of pages of comic, it compresses it into a novel I got through in an evening. This is trying to be a Book Of Ideas but also a Constant Peril Romp, and while the attempt to be both does shake it apart in the end, it does an impressive job of holding those two things together for as long as it does.
The initial gimmick is that it’s about The Last Human: a child who has been adopted by a spider-monster and who lives in a vast interspecies space empire where humans are hated by pretty much everyone. She’s pretending to be another species but is, of course, revealed, and Space Hijinks (Up To And Including Her “Doing Genocide Kind Of By Accident”) Ensue. This dominates the first 1/3-1/2 of the book and it's really good fun. The world is fun, there are lots of fun creatures and concepts, and I particularly like the mother-daughter relationship between Sarya, the protagonist, and her spider-monster mother: the “unusual lethal thing that adopts a child” dynamic is as fun here as in Terminator 2. The section where the main character is reliving her mother’s memories of becoming attached to her was probably my favourite part of the whole book; I love seeing ruthless alien killing machines have their hearts melted by a cute widdle baby, but also it’s given time to breathe. I say it's my favourite relationship in the book, but honestly it's sort of the only real relationship in the book; the main character’s friendships with the characters introduced later in the story weren’t developed enough for their payoff later on to feel earned.
This gets to the core problem with this book, which is that it should’ve been more than one book. It’s a first novel, and boy can you tell: the extent to which it is overstuffed is fun but also means that many of the things with which it is stuffed get less attention than perhaps they deserve. There are more prosaic problems, too: sometimes descriptions that seem to be clear in the head of the author are less clear when presented on the page; sometimes situations that are intended as the beginning of a pull-back-and-reveal are insufficiently telegraphed as such and had me flipping back through to try and find where they had been set up; sometimes the mechanics of the story are way too transparent and Big Reveals have their impact undermined.
Speaking of: the other key plank here is that the space empire is administered in a [handwave] fashion by a vast computer network who has implants in every being who belongs to it. Said beings are all categorised by an intelligence level system going from one to five: one being, idk, an amoeba or something, five being a “planetary-scale intelligence”. This scale system is actually what drew me to the book in the first place: Shona (who mentioned it in a video and, when I said I liked the sound of it, got it for me for my birthday, thanks Shona 🎉) talked about it, and I love in-universe systems like that.
The thing is, there was something about the way the book treated it that meant I kept expecting the “intelligence level” thing to be revealed as somehow spurious, but it never really pulled the trigger on that, which felt a bit strange. There are a lot of points where a character who seems lower-level actually isn't, or where a character a “level” higher than another will be thinking about how easy it is to manipulate the lower-“level” one (with a strong undercurrent that This Is Bad and one should not treat people as things), only for it to be revealed that they're being manipulated in their turn by an even higher-level character. But the way this book handles intelligence is basically by making every ‘high-level’ character into someone who makes fast and near-perfect inferences from limited data. They never seem to make mistakes, though; they only ever get rug-pulled by people who are a level up. There's never any point where they draw all these conclusions and then, oh wait, that was wrong, whoops. "Very smart" is not the same as "infallible".
Speaking of higher-level characters, though: can you guess whether the vast computer network who has implants in every being who belongs to it is possibly some sort of vast, galaxy-spanning AI superintelligence? Here’s the thing about galaxy-spanning AI superintelligences, right. If you’re writing one, you’re pretty much screwed from the off. As a writer, you can create a Very Smart Person character and make them be so in your writing whether or not you yourself are a Very Smart Person by exercising the magical powers of authorship and reverse-engineering action from desired outcome. A galaxy-spanning AI superintelligence really requires a bit more than that, though. There are a whole bunch of people who spend all their time thinking about AI superintelligence and the most interesting thing they’ve been able to come up with is the Basilisk, which is incredibly stupid but is at least fun. I’m not saying you shouldn’t try, but it’s very hard to do. What you end up with here is an AI that’s near-functionally God but, when it actually appears, just talks like a normal person who happens to be omniscient. There’s no sense, as there might be, of awe at approaching the divine, really. There's a bit of "see the world as I see it" but not really a lot of fear and trembling.
It’s also a bit… motiveless? The AI superintelligence here did not seem to want anything more than to keep things ticking along. I guess you’d say “the maintenance of peace amongst all the species of the galaxy” is a big goal but, at a certain stage, you have to ask: to what real end? And moreover, what was the actual system of… well, anything? We see some isolated bits of commerce, and throughout there’s a bunch of stuff which suggests that Network is also some kind of corporation, which, again, I kept expecting to go somewhere, and didn’t. This isn’t toilets-on-the-Death-Star stuff here, this is legitimately important, because later on in the book a great deal is made of whether or not we think the benevolent hegemony of this Network is good, and that’s very hard to judge when a lot of the world’s detail is unclear. Am I asking for this already-overfull book to add more to its plate? Who can say (yes).
You might be detecting that this book really likes Big Stuff, and for all that bellyaching I thought it did a decent job conveying an impression of scale. Arguably the whole book, on some level, is about scale and levels, but specifically here I’m talking about the size of the space empire. While a lot of it did come down to just saying “billions and trillions” over and over, it played around with that scale for a bit to give you a chance to orient yourself, and I thought it really worked! The stuff later on about shooting bullets at planets like you’re that run of X-Men from the mid-2000s: good.
The Network AI is in a big 10D chess game with an organic cluster-mind called Observer, which largely seemed to exist to set up an order/chaos dichotomy, and in general is fine but feels a bit thin on mechanical detail. A consciousness emerging out of a huge, huge, huge amount of networked computers? Sure, I have no trouble buying that. A collective intelligence that’s basically just “a bunch of dudes who are somehow the same”? You might have to unpack that one, even just a little, to take me with you on it. This is pretty much the big issue with the book's back half: once things get abstracted a little too far, the author decided to get a bit too “it’s like poetry, it rhymes”. There are what feel like plot holes, and there are points, particularly later on, where it’s not clear exactly what plane of materiality we’re operating on, particularly after the main character gets swallowed by a sentient liquid monster which exists to catalogue all the information in the universe, and her consciousness is resurrected by the AI. It also butts up against the problem that all books of this kind do: you’re probably not going to be able to write something that’s actually capital-P profound in a philosophical sense, so what you get is ancient aliens again, a weird location where The Abstractions Of Higher Thought Are Visualised As A Beach, etc.
I feel like I'm putting the boot in quite a lot here, but I did very much like the book. It's an enjoyable read and it has a lot of energy to it. It's really trying for something, and I appreciate the ambition even if its reach does frequently exceed its grasp. There are just so many ideas in here which I think could've really shone if allowed to fully develop.