ShuffleComp Postmortem: Experimental Rules

One of the big objectives of ShuffleComp was to try out experimental approaches to comp-running. Minicomps, being one-offs with less pressure to serve as community pillars, seem like good places to try things out. So did I learn anything whatsoever?


Games released under pseudonyms

For a start, this was fun. As for other effects: I know that at least one first-time author felt that the rule added to how comfortable they felt releasing a game. It also allowed participating authors to promote the comp in general without automatically directing votes to their game in particular. On the other hand, it does mean a little less attention for individual authors while the comp is ongoing – which may be a more important consideration for pro or semi-pro authors, for whom reputation management means substantially more than idle vanity. On the whole, though, I think that this was very much a Good Thing, and I would suggest that future minicomp organisers at least consider it.

No gag on author discussion

This was very much an only-in-low-stress-minicomps rule. A high proportion of reviewers were also authors, so there wouldn’t have been much reviewing without it.

The effects of this rule were to some extent nerfed by the pseudonym and positive-review rules; the risks of trading good reviews, of being affronted by a bad review from a fellow author, or of authors making fools of themselves by grouchily defending their games in comment threads were all greatly reduced. I don’t think we can say much about the broad effects of this one until it gets tested in a more neutral environment. (My position is that the restriction on discussion remains a Very Good Idea for major comps.)

Personally, I elected not to review games at all, partly because of organiser neutrality but mostly because I was tired. And this was kind of crazymaking, honestly, because writing about stuff is a big component of how I think about stuff, and some of these games did thought-provoking things. Later. Later.

Reviews count as votes for games

The internet has different cultures for criticism and rating. The IF community has traditionally had a pretty tough critical culture: we expect that everybody who makes a game is dedicated to the rocky road of artistic growth, and feel that cotton-wool is a poor growth medium. (And also we have a certain number of people whose only joy is to grumble about shit.) This comes from a number of places – a dissatisfaction with the publisher-driven mainstream games media (which has often been little more than advertising), one or two cultural imports from academia, a certain amount of defensive pride about the high standards of our amateurism. If you come from a place where a score lower than 8/10 means ‘don’t play this’, or where most commentary is either unqualified praise or outright hatred, we can seem awfully mean.

And people respond differently to different approaches. Some people really need to have their first effort systematically torn into tiny shreds in order to do better next time; that is how they will best flourish. Some people do better with other approaches. (And not everyone even wants to do better next time. That is profoundly, deeply weird to me, but I dunno that it’s therefore invalid.) There is not any good way to tell who will actually respond best to which critical environment – I sure as hell don’t think that the authors themselves would reliably know – but I think having more than one option is, at least, hopeful.

At the same time, I believe very, very strongly in the responsibility of the reviewer to be honest and clear about their experience of a game. So the goal of the rule was to carve out a bit of space in which reviewers were encouraged to review the games they considered worthy, reminded that they didn’t have to review every game, and perhaps nudged towards delaying negative reviews until after the voting period – all without actually muzzling anyone. If you wanted to write reviews of a game you weren’t keen on, you had a number of options: wait until after the voting period to post them, write reviews for every game and thus make your review-votes moot, or just cancel out your own No vote. (Yes, I’m aware that submitting one Yes and one No vote would not have the same effect as submitting no votes at all.) Combine that with the fact that the precise vote a game gets doesn’t matter all that much, and it adds up to some pretty mild motivations. Which was the idea.
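(For anyone puzzled by that parenthetical: here’s a quick sketch of why a paired Yes-plus-No isn’t the same as abstaining. The comp’s actual tallying method isn’t described here, so this assumes a simple Yes-ratio tally for illustration.)

```python
# Assumed scoring for illustration only, not the comp's published method:
# if games are ranked by their proportion of Yes votes, a paired Yes+No
# vote pulls the game's ratio towards 50%, whereas abstaining leaves it
# untouched.

def yes_ratio(yes: int, no: int) -> float:
    """Fraction of Yes votes among all votes cast for a game."""
    total = yes + no
    return yes / total if total else 0.0

# A game currently at 8 Yes / 2 No sits at 80%.
before = yes_ratio(8, 2)          # 0.8
# Abstaining leaves it at 80%; a paired Yes+No drags it down to 75%.
paired = yes_ratio(8 + 1, 2 + 1)  # 0.75

assert before > paired
```

The same asymmetry appears in the other direction for a game below 50%, where a paired vote pulls its ratio up.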

(To be clear, I really wouldn’t want this premise applied to the more serious affairs of IF Comp and Spring Thing, say; but I thought it’d be a good fit for lighter, lower-pressure minicomps.)

So how did this work out? Obviously it’s impossible to tell what the reviews would have looked like without this rule. Since a good chunk of the reviews were written by game authors, it seems plausible that they’d have tended towards a more convivial, Miss Congeniality-ish tone anyway.

That said, it was very clear that this rule – mild as it was – bothered more people than any of the other experiments. Some people pushed back against it by reviewing every game. Others told me that it felt weird to be adjusting their reviewing approach. That’s valuable, I think; it’s important to re-appraise stuff every now and then, and if it doesn’t feel weird then you’re not really re-appraising. There were still a number of sharp-toned reviews, or reviews that concluded No. Great! If this rule had resulted in an unbroken stream of sappy positivity, it’d have been a clear signal that it was too strong.

In general, it seemed to me that the rule – or, at least, the fact that there was a voting process – did result in more reviews than we might otherwise have seen. I’d strongly encourage future minicomp organisers to think about how to motivate reviews, and to regard voting as a key component of that.

No archive-unfriendly games

This rule was spurred by a couple of experiences. On the one hand, I’ve been writing a sequel to Joey Jones’ goofy meta-IF romp IFDB Spelunking, and in the process discovered a significant number of games that have vanished entirely – and not just little SpeedIF-level games or things from the distant past, either. On the other, I’ve talked with Emily Short about how Bee, my favourite CYOA ever, is reliant on the continued existence of the Varytale site: the text is safe, and the work could, in theory, be ported to another platform, but it’d essentially involve rebuilding the mechanics of Varytale as well as the game itself.

So partly there’s an issue about platform creators making game platforms that can’t (or aren’t meant to) survive in the wild, and partly there’s an issue about authors not archiving their work even when they could do so. This rule was obviously just about the first.

One problem is that archive-impossible and archive-awkward platforms do exist, and a lot of them are pretty cool in other respects, and authors are going to use ’em. (‘Robust archiving sensibilities’ is never going to be a killer feature.) So I think that fixing this by putting pressure on authors is probably not an ideal route: the platforms are there, and authors are looking for the platform best-suited to their work’s mechanical and aesthetic requirements, which have nothing to do with preservation. Is there a way to apply this pressure to platform creators instead? In the case of platforms which were designed primarily with commercial uses in mind, I kind of doubt it; if they’ve decided that archive-friendliness and/or playable-offline are in conflict with their ability to make money from games, there’s no real counterargument.

Another part of the issue is the nature of the IF Archive; it is built really as a vault, more concerned with preservation than access, while authors are more immediately interested in a distribution platform. My feeling is that in order to function well at either of these it is necessary to function well at both, but I don’t have (or want) the job of actually running a thing like that.

Authors can vote

I didn’t see the actual vote results, but I’d expect that a significant proportion of voters were also entrants. So at a practical level, this seems like a necessary feature in a voting minicomp, particularly one that gets relatively high participation.

Yes/No voting rather than a 10-point score; no ranking stats released

Again, by shutting myself out of access to the actual scoring totals, I’m unable to judge the effects of this too closely. On the whole, though, the top-ten results don’t seem very divergent from what you’d have expected with more graduated voting. That said, I think avoiding a precise ranking of games – particularly for games outside the Commended grade – contributed strongly to the moderate-pressure tone of the event. Introcomp, which partly inspired this approach, releases rankings for the top-placed games but not the lower-placed ones; that could also have worked, but I felt that given the wackiness of the reviews-as-votes rule and the relatively low voting in minicomps, it was more honest to honour top-placed games as a whole rather than encouraging a focus on hair-splitting stats.

I expected a lot more complaints about this, given how much the IF community loves its comp statistics, and given grumbles I’ve heard regarding the lack of transparency in Introcomp and XYZZY voting; but as it turned out, there were very few. My feeling is that people accepted this as part and parcel of the moderate-pressure pitch of the comp, and I’d encourage similar approaches in future minicomps.

This entry was posted in interactive fiction.

5 Responses to ShuffleComp Postmortem: Experimental Rules

  1. Andrew Plotkin says:

    “the IF Archive [is] built really as a vault, more concerned with preservation than access, while authors are more immediately interested in a distribution platform. My feeling is that in order to function well at either of these it is necessary to function well at both…”

    I am a big fan of the cooperative model, where several different sites each try to do one thing well. I say this because I could not have built IFDB, and Mike Roberts built IFDB without having to create and populate an IF vault. is another narrowly-focussed service that ties into this. (I would love to see an “” that hooks into the Archive.)

    As I said in my blog post, I think the answer is simply to keep the discussion alive among platform designers. ShuffleComp at least brought the issue of Seltani import/export back onto my radar.

    • Yeah, to a great extent I think the best anyone can do is to keep the topic visible.

      I get the reasons for the cooperative model; it’s just that I’ve seen a lot of things fall through the gaps. IFDB, unlike Baf’s, offers indexing without archiving. (Yes, it offers a tool for authors to upload to the Archive, and that probably helps to some extent.) I suspect that, in at least some cases, it makes authors less likely to consider archiving – they’ve filed their database entry, administrivia done, right?

      And obviously I can’t tell how much is just due to authors not archiving for other reasons, like loss of control.

  2. Emily Short says:

    I both liked and struggled with the review = vote rule. It was a bit of a relief not to feel like I needed to review everything (especially with so much to cover in such a tight timeframe), and I often don’t enjoy writing negative reviews and would rather say nothing at all.

    On the other hand, writing reviews is part of how I process dissonant experiences of games — where I liked it but thought there were some issues, or didn’t like it but respected it anyway, or whatever — so in those cases the rule ran against my habitual behavior. (Which is fine! I can get some new habits.)

    Anyway, it’s cool to have something a bit different in the mix.

  3. Yoon Ha Lee says:

    Reviews as votes: I liked this as an experiment. I am usually very tough with reviews/review-reactions, because I come from a culture where the review is for the benefit of the potential reader (player in the case of an IF) rather than a way for the author to get a personalized tutorial course, and this is basically my approach to something like IFComp. I have no problem writing negative/scathing reviews. (I think people have figured this out.) But I agree that it’s good to have a set of rules designed to encourage a gentler authoring/review-receiving experience–to have a wider set of comp/minicomp experiences available to accommodate different styles/preferences. In my case, I opted not to post largely negative reviews of games at all, on the grounds that that would be closer to the spirit of the intent. Of the two reviews I had time to post (I played around 1/3 of the games before other obligations got in the way, and opted not to vote on the one game I beta-tested), one was a Yes vote for a game that I didn’t love but thought was well-done.

    In any case, I’d love to see more experimentation designed to encourage gentler reviews, for people who prefer that sort of thing. (Me, I like my firehoses, but I am not everyone!)

  4. Jason Dyer says:

    I grok Emily’s comment about working out dissonant feelings. There’s one case where I set out to write a negative review but in the process of writing it everything clicked for me and I switched to liking the story quite a bit (I subsequently modified my review before I posted it). If I had done no review at all it likely would have just been a “No” vote from me.

    sidenote: The way author-voting works in contests in the DROD community (which are all based on a 1-10 scale) is to instruct the authors flat-out to give themselves a 10. That’s a plausible way to allow author voting if anyone wants to run a minicomp which isn’t just yes/no.
