One of the big objectives of ShuffleComp was to try out experimental approaches to comp-running. Minicomps, being one-offs with less pressure to serve as community pillars, seem like good places to try things out. So did I learn anything whatsoever?
For a start, this was fun. As for other effects: I know that at least one first-time author felt that the rule added to how comfortable they felt releasing a game. It also allowed participating authors to promote the comp in general without automatically directing votes to their game in particular. On the other hand, it does mean a little less attention for individual authors while the comp is ongoing – which may be a more important consideration for pro or semi-pro authors, for whom reputation management means substantially more than idle vanity. On the whole, though, I think that this was very much a Good Thing, and I would suggest that future minicomp organisers at least consider it.
No gag on author discussion
This was very much an only-in-a-low-stress-minicomp rule. A high proportion of reviewers were also authors, so there wouldn’t have been much reviewing without it.
The effects of allowing author discussion were to some extent nerfed by the pseudonym and positive-review rules; the risks of trading good reviews, of being affronted by a bad review from a fellow author, or of authors making fools of themselves by grouchily defending their games in comment threads were all greatly reduced. I don’t think we can say much about the broad effects of this one until it gets tested in a more neutral environment. (My position is that the restriction on discussion remains a Very Good Idea for major comps.)
Personally, I elected not to review games at all, partly because of organiser neutrality but mostly because I was tired. And this was kind of crazymaking, honestly, because writing about stuff is a big component of how I think about stuff, and some of these games did thought-provoking things. Later. Later.
Reviews count as votes for games
The internet has different cultures for criticism and rating. The IF community has traditionally had a pretty tough critical culture: we expect that everybody who makes a game is dedicated to the rocky road of artistic growth, and feel that cotton-wool is a poor growth medium. (And also we have a certain number of people whose only joy is to grumble about shit.) This comes from a number of places – a dissatisfaction with the publisher-driven mainstream games media (which has often been little more than advertising), one or two cultural imports from academia, a certain amount of defensive pride about the high standards of our amateurism. If you come from a place where a score lower than 8/10 means ‘don’t play this’, or where most commentary is either unqualified praise or outright hatred, we can seem awfully mean.
And people respond differently to different approaches. Some people really need to have their first effort systematically torn into tiny shreds in order to do better next time; that is how they will best flourish. Some people do better with other approaches. (And not everyone even wants to do better next time. That is profoundly, deeply weird to me, but I dunno that it’s therefore invalid.) There is not any good way to tell who will actually respond best to which critical environment – I sure as hell don’t think that the authors themselves would reliably know – but I think having more than one option is, at least, hopeful.
At the same time, I believe very, very strongly in the responsibility of the reviewer to be honest and clear about their experience of a game. So the goal of the rule was to carve out a bit of space: reviewers were encouraged to write reviews of games they did consider worthy, reminded that they didn’t have to review every game, and perhaps nudged to delay negative reviews until after the voting period – without actually muzzling anyone. If you wanted to write reviews of games you weren’t keen on, you had a number of options – wait until after the voting period to post them, write reviews for every game and thus make your review votes moot, or just cancel out your own No vote. (Yes, I’m aware that submitting one Yes and one No vote would not have the same effect as submitting no votes at all.) Combine that with the fact that the precise vote a game gets doesn’t matter all that much, and it adds up to some pretty mild motivations. Which was the idea.
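The parenthetical about Yes/No cancellation can be made concrete with a little arithmetic. This is a hypothetical sketch – it assumes games are compared by their fraction of Yes votes, which isn’t something the comp rules specify – but it shows why adding one vote of each kind isn’t a wash:

```python
# Hypothetical sketch: why one Yes plus one No doesn't cancel out.
# Assumes (for illustration only) that games are compared by Yes fraction;
# the comp's actual tallying method isn't stated here.

def yes_fraction(yes: int, no: int) -> float:
    """Fraction of a game's votes that are Yes."""
    return yes / (yes + no)

# A game sitting at 8 Yes / 2 No:
before = yes_fraction(8, 2)          # 8/10  = 0.80
after = yes_fraction(8 + 1, 2 + 1)   # 9/12  = 0.75

# Adding one of each pulls the fraction toward 50%, so the pair of votes
# only leaves a game's standing unchanged if it was at exactly 50% already.
print(before, after)
```

Under this assumed scheme, a paired Yes/No slightly dilutes a well-regarded game and slightly boosts a poorly-regarded one – mild either way, which fits the low-stakes spirit described above.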
(To be clear, I really wouldn’t want this premise applied to the more serious affairs of IF Comp and Spring Thing, say; but I thought it’d be a good fit for lighter, lower-pressure minicomps.)
So how did this work out? Obviously it’s impossible to tell what the reviews would have looked like without this rule. Since a good chunk of the reviews were written by game authors, it seems plausible that they’d have tended towards a more convivial, Miss Congeniality-ish tone anyway.
That said, it was very clear that this rule – mild as it was – bothered more people than any of the other experiments. Some people pushed back against it by reviewing every game. Others told me that it felt weird to be adjusting their reviewing approach. That’s valuable, I think; it’s important to re-appraise stuff every now and then, and if it doesn’t feel weird then you’re not really re-appraising. There were still a number of sharp-toned reviews, or reviews that concluded No. Great! If this rule had resulted in an unbroken stream of sappy positivity, it’d have been a clear signal that it was too strong.
In general, it seemed to me that the rule – or, at least, the fact that there was a voting process – did result in more reviews than we might otherwise have seen. I’d strongly encourage future minicomp organisers to think about how to motivate reviews, and to regard voting as a key component of that.
No archive-unfriendly games
This rule was spurred by a couple of experiences. On the one hand, I’ve been writing a sequel to Joey Jones’ goofy meta-IF romp IFDB Spelunking, and in the process discovered a significant number of games that have vanished entirely – and not just little SpeedIF-level games or things from the distant past, either. On the other, I’ve talked with Emily Short about how Bee, my favourite CYOA ever, is reliant on the continued existence of the Varytale site: the text is safe, and the work could, in theory, be ported to another platform, but it’d essentially involve rebuilding the mechanics of Varytale as well as the game itself.
So partly there’s an issue about platform creators making game platforms that can’t (or aren’t meant to) survive in the wild, and partly there’s an issue about authors not archiving their work even when they could do so. This rule was obviously just about the first.
One problem is that archive-impossible and archive-awkward platforms do exist, and a lot of them are pretty cool in other respects, and authors are going to use ’em. (‘Robust archiving sensibilities’ is never going to be a killer feature.) So I think that fixing this by putting pressure on authors is probably not an ideal route: the platforms are there, and authors are looking for the platform best-suited to their work’s mechanical and aesthetic requirements, which have nothing to do with preservation. Is there a way to apply this pressure to platform creators instead? In the case of platforms which were designed primarily with commercial uses in mind, I kind of doubt it; if they’ve decided that archive-friendliness and/or playable-offline are in conflict with their ability to make money from games, there’s no real counterargument.
Another part of the issue is the nature of the IF Archive: it is really built as a vault, more concerned with preservation than access, while authors are more immediately interested in a distribution platform. My feeling is that in order to function well at either of these it is necessary to function well at both, but I don’t have (or want) the job of actually running a thing like that.
Authors can vote
I didn’t see the actual vote results, but I’d expect that a significant proportion of voters were also entrants. So at a practical level, this seems sort of a necessary feature in a voting minicomp, particularly if it gets relatively high participation.
Yes/No voting rather than a 10-point score; no ranking stats released
Again, by shutting myself out of access to the actual scoring totals, I’m unable to judge the effects of this too closely. On the whole, though, the top-ten results don’t seem very divergent from what you’d have expected with more graduated voting. That said, I think avoiding a precise ranking of games – particularly for games outside the Commended grade – contributed strongly to the moderate-pressure tone of the event. Introcomp, which partly inspired this approach, releases rankings for the top-placed games but not the lower-placed ones; that could also have worked, but I felt that given the wackiness of the reviews-as-votes rule and the relatively low voting in minicomps, it was more honest to honour top-placed games as a whole rather than encouraging a focus on hair-splitting stats.
I expected a lot more complaints about this, given how much the IF community loves its comp statistics, and given grumbles I’ve heard regarding the lack of transparency in Introcomp and XYZZY voting; but as it turned out, very few. My feeling is that people accepted this as part and parcel of the moderate-pressure pitch of the comp, and I’d encourage similar approaches in future minicomps.