Tell Us What You Really Think

In a “Hugos/Sad Puppies Guest Post by Jameson Quinn”, “the guy who came up with the basic idea for the E Pluribus Hugo proposal” does his best to unravel the coalition that passed it on its first reading at Sasquan.

In order to understand this, it’s important to see that the Sads actually did have the germ of a valid grievance: in past years, many Hugo nominators have been from a pretty small and insular group of authors, editors, and hardcore fans, who often know each other personally and whose vote is probably influenced to some extent by factors extraneous to the work itself. Writings from authors who are personally well-liked, or whose overall body of work is stronger than the individual writing, probably have had a bit of an unfair advantage in getting nominated.

Of course, that’s not to endorse the Sad Puppy point of view. Of their three complaints — that the Hugos have been too artsy-fartsy, that they have been too political, and that they have involved logrolling — the first two are sour grapes, the second two are hypocritical, and the relationship between the three exists only in their heads. Only the third could be even slightly legitimate as cause for organized action; but certainly not for the action they took, which was basically to vandalize the awards as a whole, without any hope of actually accomplishing their objectives.

The efficiency of Quinn’s self-sabotage is impressive when you consider that half of the post is wasted concern-trolling the Republican primaries.



233 thoughts on “Tell Us What You Really Think”

  1. John Lorentz said:

    “Any attempt to determine how well EPH works with real data needs to actually work with real data–in other words, with all the different spellings that people use when they nominate.

    Especially because my attempts at the Business Meeting to indicate how much work is involved at that stage were brushed aside (at best) and belittled (at worst) …

    I realize that it’s so boring to accept someone’s real-life experiences (as a four-time Hugo Administrator) with the matter when it doesn’t match up with the theoretical discussions online on Making Light… ”

    Data cleaning has to happen in any case. The reason your objections are not “accepted” is that your argument isn’t compelling. “Because I said so” might work in a year when the Award wasn’t being attacked.

  2. Mark Dennehy: Somebody can tell me if I’m wrong, but isn’t John Lorentz continually trying to point out that in the nominating phase data was only cleaned up if it seemed likely to be added to the total of nominees that were accumulating some critical mass of votes?

    Therefore, to clean up 100% of the data would mean work beyond what was done.

  3. So Kate The Impala said she would seek suggestions from any forum that would have her. So is she coming to 770? Or does a blog not count?

  4. It’s easy to do that manually, not so easy when the data has to match exactly for EPH to use it.

    It certainly feels easier, but that’s because you’re running the cleanup algorithm on this monstrously powerful parallel processor that has more computing power than every electronic computer in the world combined 😛

    But we do have better ways to do data cleanup these days, at least for where we have huge datasets and limited human manhours to do the cleanup in. They’re utterly unrelated to EPH, but they go a lot further than simple things like grouping by edit distance and the like and they’re faster and more reliable than humans; and the cases they can’t clean up can get flagged for human attention. So the human gets to audit (say) 100 nominations to make sense of them while the computer manages the 9,900 other “easy” cases which would have otherwise just eaten all the human’s time and gotten them so weary that they might have missed one of the 100 trouble cases.
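    (To make that concrete, here is a minimal sketch of the kind of computer-assisted cleanup described above: cluster raw strings by similarity, auto-merge the confident matches, and flag the borderline ones for human audit. This is my own illustration with invented thresholds, not anything the Hugo administrators actually use.)

```python
# Minimal cleanup sketch: auto-merge confident matches, flag the
# ambiguous ones for a human. Standard library only; the thresholds
# are illustrative, not tuned against any real Hugo data.
from difflib import SequenceMatcher

def normalize(s):
    # Cheap canonical form: lowercase, collapse whitespace, drop a
    # leading article so "The Foo" and "Foo"-style variants meet.
    s = " ".join(s.lower().split())
    for article in ("the ", "a ", "an "):
        if s.startswith(article):
            s = s[len(article):]
    return s

def cluster(entries, auto=0.92, review=0.75):
    """Group raw nomination strings; return (clusters, needs_human)."""
    clusters, needs_human = [], []
    for raw in entries:
        key = normalize(raw)
        best, best_score = None, 0.0
        for c in clusters:
            score = SequenceMatcher(None, key, c["key"]).ratio()
            if score > best_score:
                best, best_score = c, score
        if best and best_score >= auto:
            best["members"].append(raw)             # confident merge
        elif best and best_score >= review:
            needs_human.append((raw, best["key"]))  # human audits this
        else:
            clusters.append({"key": key, "members": [raw]})
    return clusters, needs_human
```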

  5. Sjw75126 on September 1, 2015 at 4:39 pm said:
    I have no problem with Jameson Quinn’s rational, low key discussion of the voting system he gave to us. Just gratitude for his help.

    Thank you for saying this; I agree with you 100%.

    I read Jameson’s post with admiration, because as an expert in voting systems, he laid out a clear rationale for the differences between the voting systems. And bringing in the Republican primary was an excellent way to point out the weaknesses of ‘first past the post’ in a situation where you have a focused, lockstep minority facing off against a majority whose preferences are all over the map this early in the process. I bet the Republican Party wishes they had something like IRV working in their primary right now as they look on in horror at what The Donald is doing to them.

    I sincerely hope EPH passes next year, even though I can’t go to the con to vote for it.

    Thank you, Jameson for your hard work on this.

  6. Mike Glyer: Somebody can tell me if I’m wrong, but isn’t John Lorentz continually trying to point out that in the nominating phase data was only cleaned up if it seemed likely to be added to the total of nominees that were accumulating some critical mass of votes?

    Yes, but someone still has to eyeball those entries manually and figure out what they are in order to determine that they don’t need to be cleaned up. A bit of time and effort is saved there — but time and effort still has to be expended on those entries.

    I don’t find that argument compelling. I don’t remember how many entries were in the 1984 data (though obviously far fewer than this year), but I cleaned/normalized it all — manually — in just over an hour.

  7. Therefore, to clean up 100% of the data would mean work beyond what was done.

    Er, yeah, about that…
    You have to clean up 100% of the data all the time. Not doing it is cutting corners and most of the time it’s an acceptable way to get the job done if you only have humans and only their spare time at that.

    But humans make mistakes. Which means that sometimes you just go “oh, X won’t get past 5%, we just eliminate this vote” and you’ve mistakenly eliminated a vote for Y because you thought it was a vote for X because it’s half eleven at night and you’ve been doing this for two hours and your wife is giving you that look that says if you don’t wrap it up for the night and take out the garbage then you’re going to be sleeping on the couch again.

    EPH can’t fix that; it’s just highlighting it for the first time.
    But look, we have other tools we can point at this problem to make it easier on the Hugo administrators. Seriously, run the tape of the M2-F2’s crash and play the theme song. We have tools that can track trends on Twitter, extracting them from tens of thousands of messages a second; we have tools that can spot copyrighted video on YouTube even though users upload just over an hour of video footage to YouTube every second. The tools really are that good.

  8. @May Tree: If I understand correctly (as someone who has followed the EPH discussion off and on), the “vote tokens” system you suggest can be gamed by someone who says “I actually like all five of these books, but in order to maximize the chance that at least one of my preferences becomes a finalist, I will put all five of my tokens on the book that I think other people will also like.”

    (Can EPH also be gamed? In theory, yes. But the corner cases in which it is vulnerable are so rare, and require so much knowledge of how other people are likely to vote, that it’s not practical to do it in the context of a Hugo election.)

  9. The point of fixing spelling before releasing data is that spelling can be a clue to identity, and we want to maintain anonymity. I’d happily run EPH on raw, misspelled data; that’s what I did with the 1984 data, and I figured out an off-the-shelf routine that got 99% of the matching right in 1 step (once I had properly-tuned parameters).

  10. Tim McDonald on September 1, 2015 at 6:36 pm said:

    So, you are putting a mathematical system out there for voting, with very clear rules, and your primary adversary is a game designer. Brilliant! I can see Vox pleading desperately with you not to throw him in that briar patch now.

    Why, yes, VD is (or at least was) a game designer. One of VD’s games is The War in Heaven, a 1999 offering from Valu-Soft, which was given a 50-out-of-100 rating by Game Vortex, and 20-out-of-100 by Absolute Games. One reviewer of said game wrote:

    In order to impede your progress and generally frustrate you, the designers have placed enemies in the game. … They are equipped with deadly weapons (sticks) and advanced artificial intelligence so complex that they are capable of either walking towards you OR walking away. Sometimes the enemies choose to run in place or get stuck in doorways… Occasionally enemies will disappear too, which is definitely an added bonus that kept me “on my toes.” The game designers also opted to throw in more “difficult” versions of the same enemy. The difference between a normal badguy and its more difficult brother is their skin hue, and the fact that it takes three minutes of holding down the “attack” key to kill the difficult guy, as opposed to the normal two. After playing this game for many hours, I have discovered the optimal attack plan, which you should follow when encountering any kind of enemy in this game:
      1) Approach opponent by walking towards it in a straight line. They will do the same.
      2) Hold down the “attack” key.*
      3) Wait until enemy dies or game crashes.
      4) Repeat instructions until everything on map is dead.
    * Advanced tactic (for experienced players only): try moving to the right occasionally. It doesn’t really help, but you end up a little bit to the right of where you originally were.

    VD is also a hardware designer, who bears primary responsibility for the fabled 18-button-plus-scrollwheel-plus-joystick input device known as the Warmouse.
    Yes, I’m sure VD’s wide experience, and vast levels of expertise, should give us all pause.

    And if I were a statistician for one of the groups who changed the rules repeatedly for what constituted AIDS…

    Huh? Where did that come from? Oh, wait, it’s National Nonsequitur Fortnight (motto: “We eat pizza with motor oil!”). Okay, in that case, I respond: In any quadrilateral, the longest side is opposite the shortest point. Unga bunga!

    Other than that, my own plan is to compare all finalists to Dune, The Moon is a Harsh Mistress, Dorsai!, Way Station, Lord of Light, and Ringworld…..and vote No Award a LOT.

    [nods] A perfectly valid course of action. I wouldn’t do that myself, but [shrug] if that’s what floats your Hugo-voting boat, who am I to say thee nay?

  11. I realize that it’s so boring to accept someone’s real-life experiences (as a four-time Hugo Administrator) with the matter when it doesn’t match up with the theoretical discussions online on Making Light, but the real world is the one we’re stuck with.

    When we get real-world data we can ground these discussions in the real world. I’ve put in a request and am waiting to hear back.

    Until then, has a Hugo administrator ever fully documented the process that occurs when nominations and votes are counted? I’m a programmer who’d love to see how volunteers and programmers have dealt with issues such as the normalization of a nominated work.

  12. The difference between a normal badguy and its more difficult brother is their skin hue …

    Which skin color in Beale’s game was more bad?

  13. @Mark Dennehy: Thanks for keeping me honest.

    The Gibbard–Satterthwaite theorem states that, for three or more candidates, one of the following three things must hold for every voting rule:

    1. The rule is dictatorial (i.e., there is a single individual who can choose the winner), or

    2. There is some candidate who can never win, under the rule, or

    3. The rule is susceptible to tactical voting, in the sense that there are conditions under which a voter with full knowledge of how the other voters are to vote and of the rule being used would have an incentive to vote in a manner that does not reflect his or her preferences.
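    (A toy illustration of condition 3, using ordinary first-past-the-post voting and invented ballots, nothing Hugo-specific: with three voters and an alphabetical tie-break, one voter does strictly better by misreporting their preferences.)

```python
# Tactical voting under plain plurality with an alphabetical
# tie-break. Voters and preferences are invented for illustration.
def plurality(votes):
    tally = {}
    for v in votes:
        tally[v] = tally.get(v, 0) + 1
    # most votes wins; alphabetical order breaks ties
    return min(tally, key=lambda c: (-tally[c], c))

# Sincere first choices: v1 likes A best, v2 likes C best (and
# hates A most), v3 likes B best.
print(plurality(["A", "C", "B"]))  # three-way tie -> "A" wins

# If v2 (preferences C > B > A) abandons C and votes B instead,
# B wins outright -- an outcome v2 prefers to A. That incentive
# to vote insincerely is exactly what condition 3 describes.
print(plurality(["A", "B", "B"]))  # "B"
```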

  14. Any system can be gamed in theory (as long as it has more than two and finitely many possible outcomes, and in many cases even if not). That’s called the Gibbard-Satterthwaite theorem (Gibbard’s proof is nicer).

    But yes, the circumstances where EPH is gameable are unlikely to arise in practice, and even more unlikely to be knowable by the voters. Basically, it involves bullet voting to raise a work from 6th place in points (when only 6 remain) to 4th place, without dropping any allied works to 6th place in nominations.

  15. Curses; beaten to the punch.

    Note that G-S applies to multi-winner systems too. Just consider every possible set of winners as a “candidate”.

  16. Um… If the data isn’t being fully cleaned as standard, doesn’t that ever lead to problems?

    ETA: Brain is a bit futzed from poking the 1941 Retro Hugo suggestions (halfway through alphabetisation, rest will have to wait until tomorrow), so excuse me if that’s a silly thing to say.

  17. To point up the fallibility of human cleanup: in 2009, human talliers didn’t manage to realize that “Captain Britain and MI13: Secret Invasion” and “Captain Britain and MI13” were the same thing, so Paul Cornell lost out on a nomination. (Kevin Standlee has noted this one a couple of times.) So it’s a problem under both systems, and it’s not obvious that under EPH it would be significantly worse.

  18. @John Lorentz: FWIW, I’ve pleaded with commenters here to please watch the Business Meeting videos and listen carefully to the people who spoke to express doubts about or opposition to EPH, as all of them are extremely credible people who know what they’re talking about. That includes you, Mark Olson, Ben Yalow, Kent Bloom, and probably a number of others I’m forgetting. When all of those very experienced WSFS con-runners suggest caution (and they cited somewhat overlapping but diverse reasons), anyone with even a grain of common sense should listen carefully.

    Your comments have already been enormously helpful, and I hope proponents pay attention. (Yes, I voted in favour, but at the time I said and truly meant that we should use the year between now and MAC2 to decide whether the idea should be jettisoned, amended, or ratified — and urged a yes vote at Sasquan on the basis of best preserving our option to act decisively in 2016, or not at all, as seems best then.)

  19. I myself voted for 4and6 this year, not because I think it should happen, but to preserve maximum flexibility for next year’s deciders.

  20. VD is also a hardware designer, who bears primary responsibility for the fabled 18-button-plus-scrollwheel-plus-joystick input device known as the Warmouse.

    Really? That could actually be…
    http://cdn.ubergizmo.com/photos/2009/12/warmouse-meta.jpg
    …wait, what? Why would you put the buttons there? The ones on the left of the scrollwheel are going to be a bit finicky but the ones to the right, under your middle finger are going to be absolutely horrible to use. Why wouldn’t you just do it right? Razer did…
    http://assets.razerzone.com/eeimages/products/13785/features-led.png
    (Your thumb’s a lot more nimble than your fingers for picking out buttons in an x-y grid like that at close range – it works really well)

  21. I’m a programmer who’d love to see how volunteers and programmers have dealt with issues such as the normalization of a nominated work.

    Um, imperfectly?
    I’ve seen some of the stuff involved with 1984 – there are massive (3-inch) printouts of the nominations, sorted, so they could be cross-checked. The software as well as the hardware was a bit more limited then, so I’d expect that better would be possible now. (For one thing, we didn’t have Excel, which really does make it easier to handle this kind of data.)

  22. Speaking as a gamer, I think I can say with some authority that game designers aren’t infallible, and do make horrible, horrible game-ruining errors that a two-year-old could have pointed out. Saying “game designer, haha, I win” is as likely to get you anywhere as saying “I have a Mensa card, haha, I win”. It’s just posturing, and who has time to be afraid of that?

  23. I think maybe the important bit in G-S is “a voter with full knowledge of how the other voters are to vote”. It’s not enough to know how some other voters will vote, you have to know how all of them will vote. Given how the Hugos work right now, that just doesn’t seem plausible (but hey, the nice thing here is that you can test it – feed data into EPH, try to game it, measure the effects, write it up).

    Second point: In case I came across as dismissive earlier, I wasn’t trying to be – I’m just saying that the technical problems the Hugo administrators are facing with things like data cleanup have been the subject of a few decades of research and we now have tools that could make their lives enormously easier.

  24. Um, imperfectly?

    I was thinking more about technical solutions, like whether the software uses soundex codes to find misspellings of an author’s name. They’re used in genealogical databases because surnames were often misspelled or had close variations.
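    (For the curious, a minimal sketch of American Soundex, slightly simplified in that H and W are treated like vowels; this illustrates the idea only and is not code from any genealogy product or Hugo tallying software.)

```python
# Simplified American Soundex: keep the first letter, encode the
# remaining consonants as digits, collapse adjacent duplicates,
# pad/truncate to four characters.
CODES = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
         **dict.fromkeys("DT", "3"), "L": "4",
         **dict.fromkeys("MN", "5"), "R": "6"}

def soundex(name):
    letters = [c for c in name.upper() if c.isalpha()]
    if not letters:
        return ""
    digits = [CODES.get(c, "") for c in letters]
    out, prev = [], digits[0]
    for d in digits[1:]:
        if d and d != prev:   # skip vowels and adjacent duplicates
            out.append(d)
        prev = d
    return (letters[0] + "".join(out) + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # R163 R163 -- a match
```

    (Note the limits, though: “Standlee” and “Stanley” encode as S353 vs. S354, so a stray “Kelvin Stanley” would slip right past a pure Soundex match.)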

  25. So, you are putting a mathematical system out there for voting, with very clear rules, and your primary adversary is a game designer. Brilliant! I can see Vox pleading desperately with you not to throw him in that briar patch now.

    Our current voting system is mathematical and has very clear rules too.

    When you picture Beale, is he wearing long underwear, a cape and cowl, surrounded by minions who laugh demonstratively at all his jokes?

  26. Mark Dennehy: In case I came across as dismissive earlier, I wasn’t trying to be – I’m just saying that the technical problems the Hugo administrators are facing with things like data cleanup have been the subject of a few decades of research and we now have tools that could make their lives enormously easier.

    I would like to say this myself. I have a lot of admiration and respect for Mr Lorentz and Hugo administrators past and present. It’s an incredibly difficult job — and I applaud the dedication, time, effort, and integrity they have all brought to the job. I know that they want only for the Hugo Awards to be the best program possible. Their concerns are valid, and should certainly be listened to and taken on board.

    But we also have some amazing tools available to us to reduce the workload, leaving more time for the human element (which is still required for true accuracy). The voiced concerns are not to be dismissed — but they’re not cause to consider EPH unworkable, either.

  27. So, you are putting a mathematical system out there for voting, with very clear rules, and your primary adversary is a game designer. Brilliant! I can see Vox pleading desperately with you not to throw him in that briar patch now.

    Some of my best friends are game designers. (Are they important game designers? No, no they most definitely are not. Then again, neither is VD.) This sort of pronouncement tends to make me think someone is more of a fan than a gamer. Part of what’s assessed in beta testing is what happens when people try to break a game and use it in ways its creators didn’t intend. In MMOs, especially, that monitoring continues long past the testing stage. When people are observed using the game in mischievous or malicious ways, that means it’s time for a patch to close up whatever loopholes they were exploiting.

    Does that mean that people exploiting the loopholes go away? Of course not. Some of the people who merely found one amusing might find other things to do once it’s closed, but people exploiting something for money or the game’s equivalent of power will generally keep looking for new ways to do so. The game continues watching for them. I suppose VD would say that this means that the gold sellers and the hackers win, but there are all kinds of long-running, well-managed games that have been operating for years where it’s clear the game is ahead of people trying to exploit it.

    I doubt VD would believe this, but he’s not the only person in the world who’s able to evaluate problems in this manner. Most of EPH suggests to me that there are lots of people in fandom who can do the same. He’ll take some time and find some other way to be disagreeable, and if necessary, others will work to counter his effects.

  28. Many of the BM objections to EPH seemed to boil down to two things: “we only have a single year of effective slating and we don’t know if this will ever happen again” and “it would cause too much extra work for the administrator”. I don’t really see any merit in the first argument (VD has said he’ll be back, while Kate Paulk’s planned recommendation list looks an awful lot like a badly-disguised slate) and if we have tools to mitigate the second, we should certainly use them.

  29. I doubt VD would believe this, but he’s not the only person in the world who’s able to evaluate problems in this manner.

    VD is the poster child for Dunning-Kruger syndrome. He is an utter and complete incompetent failure in any field that he ever stepped into (an 18-button war mouse????), he was given money by his wealthy father, who is now a jailed felon for literally giving people criminally bad advice (look it up), and VD hightailed it to Italy, where he is safe from some nasty questioning should he return.

    He is not some evil genius, and the only reason he has a voice is that there are people out there more incompetent and stupid than he.

  30. The Warmouse!

    hahahahahahahahahahhahahahahaaahhhhh!

    Kids, you might not remember how much the internet made fun of that at the time. But everyone did. Gamers and non-gamers alike. It brought a lot of people together. Most of the non-gamers thought it was a shopped picture. So did people who didn’t follow gaming news.

    It had the button equivalent of two Chapter 5s.

    And “Valu-Soft” says it all right there.

  31. They’re used in genealogical databases because surnames were often misspelled or had close variations.

    Soundex helps, but it’s a little too fuzzy sometimes. And can be a pain to use, if not all your people are using English-approximate phonetics. (Like, for example, the people I have in my database who are Quebecois. Or Norwegian. I’ve been doing genealogy since before I met fandom.)

  32. “we only have a single year of effective slating and we don’t know if this will ever happen again” and “it would cause too much extra work for the administrator”

    The first argument fails because this was SP3, and Correia’s attempt in 2013 was at least the third try at gaming the nominations this way.
    The second one suggests that the admins need to re-evaluate the way they’re doing it now, and, for example, consider doing cleanup as they go. Cleanup is always going to be needed. (Here I’m speaking as someone who has done data extraction, entry, and QC on a very large database (more than 4 million records), and QC on various other forms of data, for most of the last 30 years.)

  33. Mike Glyer on September 1, 2015 at 2:28 pm said:

    Nicholas Whyte: Sasquan’s Glenn Glazer said they would do it.

    And while Glenn is part of the Hugo Administration Subcommittee, he’s not the person who will actually have to create the files. (Also, Glenn has been on a driving vacation since just after Sasquan. I think he gets home tomorrow.)

    Mark Dennehy on September 1, 2015 at 2:50 pm said:

    I’m not sure what the US equivalent is to the Irish Data Protection Act,…

    I do not think there is an equivalent.

    May Tree on September 1, 2015 at 6:03 pm said:

    Why is a single vote split into fractions preferable to, say, giving everyone five votes and letting them apportion those votes as they see fit,…

    Others who are advocates for EPH can explain this more in depth, but such a “vote token” system strikes me as more subject to slate voting, not less.

    But the fact that some have looked at EPH and concluded that it gives each nomination 1/5 of a vote, and possibly that you can have more effect on the result by only nominating one work, is evidence of a system that is so complicated that it’s confusing people. But OTOH, it’s also clear to me that for many people, “first past the post, everyone vote for one and only one thing and whoever has more votes than anything else wins even if 85% of the voters chose something else” is all they can understand; anything else loses them. It’s frustrating in many ways.

    John Lorentz on September 1, 2015 at 6:45 pm said:

    Any attempt to determine how well EPH works with real data needs to actually work with real data–in other words, with all the different spellings that people use when they nominate.

    Agreed. I thought so from the beginning. People have to have a realistic understanding of how difficult the task is. IMO, most of the theorizing has been by people working on perfectly normalized data, and they consider the normalization stage trivial. It’s not.

    Administrators don’t clean up every single entry. There’s no reason to do so. They only clean it up when it looks like there’s a reasonable chance that the work will get up near the final ballot, and while they make mistakes, I doubt that any computerized normalization algorithm will catch the 2009 case mentioned above that cost Paul Cornell a Hugo Award finalist slot. The theoretical modelers are expecting perfectly normalized data down to the individual entry.

    Mark Dennehy on September 1, 2015 at 7:14 pm said:

    You have to clean up 100% of the data all the time.

    You obviously have never actually administered the Hugo Awards. Saying this tells me that Sasquan really needs to release very raw, very un-normalized data, so that everyone who is convinced that data normalization is trivial can spend the next few months fighting over it and never actually getting a chance to run the data through the EPH algorithm, because there will never be 100% agreement upon what sort of data normalization algorithm to use.

    (And note that I’m a database programmer in my Day Jobbe, usually dealing with large quantities of geographic information, and the data is never normalized and is a huge pain in the neck to keep clean.)

  34. Kevin Standlee: Saying this tells me that Sasquan really needs to release very raw, very un-normalized data, so that everyone who is convinced that data normalization is trivial can spend the next few months fighting over it and never actually getting a chance to run the data through the EPH algorithm, because there will never be 100% agreement upon what sort of data normalization algorithm to use.

    The 1984 data gives a good idea of what sorts of variations one can encounter; here’s a similar example:

    Title
    The History of WSFS Business Meetings
    History of WSFS Business Meetings, The
    History of WSFS Business Meetings
    The History of WFSS Business Meetings
    History of Business Meetings
    WSFS History
    Business Meetings History
    The History of SFWA Business Meetings

    Author
    Kevin Standlee
    standlee
    Kevin A Standlee
    Kevin A. Standlee
    KA Standlee
    K A Standlee
    K. A. STANDLEE
    Kelvin Stanley
    <no author to go along with Title>
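    (Purely as an illustration: scoring that author list against a known canonical form with a stock similarity ratio shows which variants a machine could merge on its own and which, like “Kelvin Stanley” at 0.80, fall below an invented 0.85 bar and need a human eye.)

```python
# Score the variant list above against the canonical name; high
# scorers auto-merge, low scorers get flagged. Threshold invented.
import re
from difflib import SequenceMatcher

CANON = "kevin a standlee"

def norm(name):
    return " ".join(re.sub(r"[^a-z ]", " ", name.lower()).split())

variants = ["Kevin Standlee", "standlee", "Kevin A Standlee",
            "Kevin A. Standlee", "KA Standlee", "K A Standlee",
            "K. A. STANDLEE", "Kelvin Stanley"]

for v in variants:
    score = SequenceMatcher(None, norm(v), CANON).ratio()
    print(f"{v!r:22} {score:.2f}",
          "auto-merge" if score >= 0.85 else "flag for a human")
```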

  35. Author
    Kevin Standlee
    standlee
    Kevin A Standlee
    Kevin A. Standlee
    KA Standlee
    K A Standlee
    K. A. STANDLEE
    Kelvin Stanley

    Say, what ever happened to that Standlee dude? Haven’t heard anything from him in a while.

  36. the data is never normalized and is a huge pain in the neck to keep clean

    You have my complete sympathy and understanding.
    I worked for a utility company, starting with being a peon data-entry person, helping to stuff one of their databases, where we had standardized ways to enter everything. (Which didn’t fix everything: Hey, Merle, is 18 inches one foot or two feet?) I was very pleased, on getting a dataset that had been extracted some 20 years later, to find it was still about 90% reliable, and most of the problems were things that could be fixed by better training. (It matters when it’s the database used to find the utility’s meters.)

  37. If the first data being presented is full data including spelling variations (even separated by categories with different pseudo-IDs for the same person in different categories) it probably makes sense to have that be a single disclosure to Jameson with an NDA (and other conditions if needed). That allows testing of the normalization software and will allow estimation of how much human time is needed for checking. But there can be other datasets.

    Suppose Jameson creates his computer-aided cleaned/normalized data and presents the Sasquan people two derived datasets, one of the full normalized data by category and another with nominees that have few nominations and do not reach the later stages of the calculations replaced by random codes. They can then look at them and decide whether the latter is sufficiently deidentified for wider distribution to others who want to confirm Jameson’s results. (This is off the top of my head. Some modification of the above might be better.)

  38. @Mike Kerpan:

    Many of the BM objections to EPH seemed to boil down to two things: “we only have a single year of effective slating and we don’t know if this will ever happen again” and “it would cause too much extra work for the administrator”.

    You’ve just given what one might call the CliffsNotes renditions of what John Lorentz and Kent Bloom said. (That is not an objection. I’m just saying you boiled down what they said to ultra-terse representations.)

    However, there were also other very thoughtful observations, such as Mark Olson and Ben Yalow arguing, in different ways, that in present circumstances a rush to amend the nominations process could very easily look like changing the rules merely because the Business Meeting regulars didn’t like the outcome (that was ‘look’, not ‘be’), and also that adopting a complex voting algorithm as a direct replacement for a simple one is poorly compatible with WSFS’s goal of transparency.

    Now, please, I ask that people kindly not debate my after-the-fact effort to do justice to arguments made by people whose position I voted against. I cannot hope to have succeeded very well in conveying what Mark and Ben said. Nor, to my knowledge, are they on File770 to voice their view in their own words. On the other hand, you can watch the Business Meeting videos. Transcribe their words, and argue with that, and that would be fair.

    I know that just stating the CliffsNotes renditions doesn’t suffice, because I tried while posting here during the Business Meeting sessions. It does not in any way do an adequate job.

  39. From where I sit, some kind of anti-slate measure is pretty much an existential necessity for the Hugos. If the Hugos are ever reduced to a battleground between warring slates, kiss ’em goodbye; if fandom must needs No Award most-to-all of the ballot in order to avoid giving recognition to slate-driven garbage and keep on doing so year after year after year… that, too, is death (of a different kind) for the Hugos.

    Up to now, the main anti-slate measure has been fandom-at-large. More specifically, the more-or-less universal consensus that Slates Rilly, Rilly Suck. And the Hugos have gotten along quite nicely over the past 60-odd years with just fandom’s unofficial, informal ‘social sanctions’ hammering on anybody who was asshole enough to try slating their way to a Hugo (Black Genesis, anyone?). But this is a purely social defense, isn’t it? The disapproval of fandom-at-large cannot dissuade any would-be slatemonger who doesn’t give a shit what fandom-at-large thinks.

    Enter: Theodore “Vox Day” Beale.

    VD clearly doesn’t give a tinker’s damn about any consequences of a purely social nature. VD slated the living shit out of the Hugos this year, and it’s only sensible to act on the assumption that VD will continue to slate the living shit out of the Hugos until he gets tired of it, or until he runs out of useful idiots (er, people who are willing to be his minions), whichever comes first. Given that VD’s observed behavior indicates he’s ready, willing, & able to hold onto a grudge for an arbitrarily large number of years, it would be pretty damn stoopid to think VD will get tired of slating the Hugos any time in the foreseeable future; given that VD has willing minions, despite his obvious toxicity, it would, likewise, be stoopid to think he’s going to run out of minions any time soon.

    Is anybody out there willing to bet the Hugos that VD is ever going to give up on being… well… VD?

    Is anybody out there willing to bet the Hugos that VD is the last slatemonger that will ever impinge upon fandom?

    I’m not willing to take either of those bets, myself. Like I said above: Some kind of anti-slate measure is pretty much an existential necessity for the Hugos. And this year’s Hugos are a concrete demonstration that the Hugos need anti-slate measures which are not, in any way, dependent on the presumption that ‘nobody wants to be That Asshole’.

    Yeah, I’m ignoring the Sad Puppies. For one thing, the 2015 Sads were the ultimately forgettable ‘opening act’ to the ‘headliner’ that was VD’s 2015 Rabids. For another thing, the Sad Puppies’ track record up to now can be summed up as “Double down on everything from last year!!!”, so if the Sads keep on keepin’ on, it’s just a matter of time until they become, to all intents and purposes, VD’s Rabids with the serial numbers filed off and a fresh coat of paint. In short, the Sads are irrelevant.

    Now, I’m not particularly concerned about which particular anti-slate measure(s) end up being implemented. I do like E Pluribus Hugo, but I’m definitely willing to listen to any alternative anti-slate measure which is at least as effective as EPH at de-fanging slates, and/or does no more collateral damage to the procedures and protocols which have grown up around the Hugos. I just haven’t heard of any such alternative anti-slate measure. Sorry, 4/6 supporters, but from where I sit, 4/6 fails on the “at least as effective” criterion…

    Anyway.

    John Lorentz has been talking about how the data renormalization thing means that EPH will add to the workload that Hugo administrators must shoulder. If Lorentz only means to say that the additional workload is a factor that must be taken into account before one decides to support or oppose EPH? I get it. I might quibble over the detail of how much additional work EPH entails, but if EPH does entail more work for the Hugo admins, it entails more work for the Hugo admins. [shrug] I just happen to think that EPH is worth that extra workload—see my first paragraph for details.

    If Lorentz means to say that the additional workload will be sufficiently onerous to justify rejecting EPH? Well, I disagree, for reasons already stated.

  40. For any idea what might be involved, just try to track all the different ways authors and books were referred to in the last few months on File770 (And is there a space in that?). I saw people get titles wrong in Kyra’s brackets, where it was right there in front of one multiple times. People have incorrectly cited the names of their favourite books, and not just by forgetting or adding an indefinite or definite article. (Even God Stalk isn’t immune, and not thanks to renaming in republication. Is it one word or two? How many people now think the exclamation point is part of the original title, and not File770 enthusiasm?) Yesterday someone called Kary English “Kary Fisher”. (And in nomination data, would that be ID’d as Carrie Fisher or Kary English? The context would have to be very specific, as the former does have fiction out…)

  41. Kevin Standlee:

    “Administrators don’t clean up every single entry. There’s no reason to do so. They only clean it up when it looks like there’s a reasonable chance that the work will get up near the final ballot, and while they make mistakes.”

    Actually, the same goes for EPH. If we only count the highest votes (let’s say they get 150, 130, 110 and 90), we can conclude that the most a slate can dilute a vote is to 0.2 instead of 1. We take the smallest of the top five and divide by five; then we get 18. Nominees with fewer than 18 nominations can’t win, even if all of the top five are slates.

    So no cleanup is needed there. I guess it is possible to calculate an even better smallest-cleanup number. What I wanted to say is only that 100% cleanup is not needed.
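    (The arithmetic, spelled out with the same numbers:)

```python
# Worked version of the argument above. Under EPH each ballot carries
# one point split across its surviving entries, so a work backed by
# full five-entry slate ballots still keeps at least 0.2 points per
# nomination; a work with N nominations never exceeds N points.
top_counts = [150, 130, 110, 90]      # counts quoted in the comment
floor_points = min(top_counts) * 0.2  # 90 * 0.2 = 18.0
print(floor_points)  # any nominee with fewer than 18 raw nominations
                     # can never out-point the leaders, so its spelling
                     # variants never need cleaning at all
```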

  42. John Lorentz has been talking about how the data renormalization thing means that EPH will add to the workload that Hugo administrators must shoulder.

    I think he’s mistaken; the normalization for EPH should be identical to the stuff done already, as EPH is part of the counting method (think of it as an extra set of loops, run in sequence; it’s quite similar, in the elimination part, to what’s done with the final ballot). If the admins are already overloaded with the normal cleanup, they might need to change the way they’re doing it to reduce the load.
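    (For concreteness, here is my reading of those loops as a compact sketch. This is an illustration of the EPH proposal as I understand it, not anyone’s official tallying code, and the real rules handle ties more carefully.)

```python
# EPH sketch: one point per ballot, split evenly among its entries
# still in the running; each round the two lowest-point entries face
# off and the one with fewer raw nominations is eliminated, until
# only `finalists` remain. Tie handling is simplified here.
from collections import Counter

def eph(ballots, finalists=5):
    remaining = {w for b in ballots for w in b}
    noms = Counter(w for b in ballots for w in set(b))
    while len(remaining) > finalists:
        points = Counter()
        for b in ballots:
            live = [w for w in set(b) if w in remaining]
            for w in live:
                points[w] += 1.0 / len(live)
        # selection: the two entries with the fewest points...
        pair = sorted(remaining, key=lambda w: points[w])[:2]
        # ...elimination: of those, drop the one with fewer nominations
        remaining.discard(min(pair, key=lambda w: noms[w]))
    return remaining

# e.g. eph([["A", "B", "C"], ["A", "B"], ["C"], ["A"]], finalists=2)
```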

  43. I also don’t see many people belittling the idea of data normalisation, so much as the following:
    1. Data nerds trying to volunteer to work on the problem if they had some data to work with.
    2. Surprise at the amount of manual work being done, either through lack of knowledge of the amount of noise in the data or the administrators’ lack of fancy tools.
    3. People surprised that the data isn’t normalised already.
    4. People working on the calculating end instead of the normalisation end simply because that’s the part they can actually work on right now. Without data, you can’t normalise data!

    I think many people would love to see the actual raw data as well as the normalised data, just to work on the normalisation problem. Fen work on strange things for fun.

    Given that John Lorentz has stated he doesn’t want to clean up the typos (suggested in order to aid anonymity), I hope to see that data available in the near future. File off the serial numbers, randomise the ID by category, and ship it. I believe people have even volunteered to help with this part, too!
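    (The serial-number filing itself is a small job; here is an invented sketch of the kind of thing volunteers have offered, not Sasquan’s actual process.)

```python
# Replace each real ballot ID with a fresh random pseudo-ID, drawn
# independently per category so the same nominator can't be linked
# across categories. Invented sketch, not Sasquan's actual process.
import secrets

def pseudonymize(rows):
    """rows: iterable of (ballot_id, category, entry) tuples."""
    mapping = {}   # (ballot_id, category) -> pseudo_id
    out = []
    for ballot_id, category, entry in rows:
        key = (ballot_id, category)
        if key not in mapping:
            mapping[key] = secrets.token_hex(4)
        out.append((mapping[key], category, entry))
    return out
```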

  44. The debate appears to have grown and tailed off overnight (for me), so I’ll just say that I want to give a round of applause first to the people who worked hard on EPH this year, who have had plenty of plaudits, but more importantly to the Hugo administrators and other volunteers, whose work over the decades is perhaps a little unsung.

  45. @John Lorentz

    As I see it the main reason for running the data for this year through EPH is to see how it affects the final ballot. Raw un-normalised data isn’t needed for that.

    Obviously you have concerns about the workload of data normalisation. However, at the moment this is a black box to many (all?) of us. Is the current process completely manual? Is any of the normalisation done by software (and then maybe hand-checked), or is it computer-assisted in any way? To develop such systems, though, a developer would need access to the raw un-normalised data (although just the category and the title would be enough).

  46. Data normalization. Hmmm.

    Just for grins, let’s say that the Sasquan concom took the raw nomination lists, and extracted data therefrom to create 2 (two) lists which get released to the public:

    List 1, consisting entirely of all the “Name of author/creator” entries, in alphabetical order, and nothing else.

    List 2, consisting entirely of all the “Name of nominated work” entries, in alphabetical order, and nothing else.

    At first blush, it seems to me that those two lists would suffice to give people a concrete data-set on which to base arguments re: how much work this data-normalization stuff really is—and it would also be helpful for people who’d like to help create tools to assist the Hugo admins in their data-normalization duties. It also seems to me that these two lists, being utterly divorced from any indication of Who Nominated What, would not violate anyone’s privacy.

    Does this strike anybody with actual, you know, Hugo admin experience, as a thing that might be worth doing?
