Michael J. McDonagh

An established writer who recently went to work becoming an author, trying valiantly to make someone give a damn and chronicling the process.

The Spritz or the Shitz?

“Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” — Ian Malcolm, Jurassic Park


  The developers of “Spritz” tout it as “an insane new app” that allows you to read 1,000 words per minute. I was somewhat surprised that the app’s developers and I have precisely the same assessment of the technology, since I’m not a fan and they are the people who are trying to sell it. First, a definition:


inˈsān/ adjective 1. in a state of mind that prevents normal perception, behavior, or social interaction; seriously mentally ill. “certifying patients as clinically insane”

synonyms: mentally ill, mentally disordered, of unsound mind, certifiable.

This is a freaking milestone for truth in advertising. At least if you take it at face value. I’m sure they mean “insane” as in “check out how insane I am with my skateboard and bong full of Red Bull.” In other words, “We’re a bunch of middle-aged dudes who realized our target audience is our kids’ ages, so we’re trying to talk all hip and cool, like Shaun White or something, like the marketing consultants told us to.” First, there is nothing sadder than a dude who looks like this:


“Wooooah, Dude, this is totally freaking insane!”

. . . talking like that. I know, because I look a hell of a lot more like the guy in that picture than whatever caricature of an awesome hipster stoner dude he’s trying to channel. It’s still funny, though, because my biggest criticism of Spritz stems from the fact that it is insane — it, quite literally, forces you to read “in a state of mind that prevents normal perception.”


First, what is it?

Spritz is a concept that takes some of the science we covered in What the Reader Really Sees on the Page and, in the name of reading faster, ignores the rest of the science behind how we read. The app flashes text on the screen, one word at a time, at rates up to 1,000 words per minute. It looks like this:


First off, that is 500 words a minute. Although they claim their mentally imbalanced insane app lets you read twice that fast, none of the posted examples actually go that high. They go half that high, which should probably tell you something, particularly if you find the 500 WPM version as annoying as I do. There is no fucking way I could be bombarded like that for two or three hours.

But, that’s what it is (at 50% power, anyway).
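For the curious, the core trick (rapid serial visual presentation, or RSVP) is simple enough to sketch in a few lines of Python. This is a toy terminal version I wrote for illustration, not anything from Spritz’s actual code; the rates are the ones discussed above.

```python
import sys
import time

def delay_per_word(wpm):
    """How long each word stays on screen at a given words-per-minute rate."""
    return 60.0 / wpm

def spritz(text, wpm=500):
    """Flash one word at a time in place, RSVP-style."""
    pause = delay_per_word(wpm)
    for word in text.split():
        # "\r" rewrites the same line, so each word replaces the last
        sys.stdout.write("\r" + word.center(20))
        sys.stdout.flush()
        time.sleep(pause)
    sys.stdout.write("\n")

# At 500 WPM each word gets 120 ms; at their claimed 1,000 WPM, just 60 ms.
spritz("But that is what it is at fifty percent power anyway", wpm=500)
```

Crank the `wpm` argument up to 1,000 and you’ll see why none of their posted demos actually run that fast.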

What do they claim?

The Spritz people make all sorts of claims on their website and in their marketing materials. Their explanation is summarized on their webpage, which states:

Traditional reading involves publishing text in lines and moving your eyes sequentially from word to word. For each word, the eye seeks a certain point within the word, which we call the “Optimal Recognition Point” or ORP. After your eyes find the ORP, your brain starts to process the meaning of the word that you’re viewing. With each new word, your eyes move, called a “saccade”, and then your eyes seek out the ORP for that word. Once the ORP is found, processing the word for meaning and context occurs and your eyes move to the next word. When your eyes encounter punctuation within and between sentences, your brain is prompted to assemble all of the words that you have read and processes them into a coherent thought. When reading, only around 20% of your time is spent processing content. The remaining 80% is spent physically moving your eyes from word to word and scanning for the next ORP. With Spritz we help you get all that time back.

Are their claims true?

Some of the words they’re throwing around probably sound familiar if you read this blog. Much of what they are saying, when it comes to the process of how we read, is absolutely true. We move our eyes in saccades, hopping from word to word. We have a focal point (where your fovea is pointed). Our eyes find a point in the word that allows our brain to best process it (what they are calling the “ORP”).

Much of what they’re saying is blatantly misleading, though. For starters, it’s not like you have time to order a pizza while your eye is looking for a place to focus (the “ORP”) in the next word. Your brain already found it and told your eye where to go while you were reading the previous word. That’s the reason we see seven or eight letters ahead of the four letters we are focused on, regardless of whether we read left to right (e.g., English) or right to left (e.g., Hebrew). If the word is exceedingly familiar and a few letters long (like “and”), our fovea will never rest on it; it will have already moved on to the following word.

I have absolutely no idea where they came up with the “20% of our time reading is spent processing content, and 80% is spent moving our eyes” thing. I tried to find a study supporting that (the Google machine brings back a lot of Spritz shitz, and nothing else). But the claim is idiotic just on its face. I know, because I read, and the entire time I’m reading, I have a little movie going on in my head. I haven’t had 20% movie, 80% waiting for the next frame of the movie since I stopped having to sound out every freaking word when I was five years old.

While it’s true the rate of eye movements is absolutely the gating issue when it comes to reading speed, they apparently didn’t realize that we developed our alphabetic/syllabic system of writing to function with the way we process words. We do so through the auditory faculties in our brains, which have known “language” as anything but sound for only between 1% and 0.0025% of the time our species has had language. So I think the Spritzheads have the concept completely backward — our ability to read is not limited by the system we created for communicating in writing; we developed a system of writing that mirrors our ability to convert letters into syllables, syllables into ideas, and “hear” what the hell is going on in the story.

This whole “we spend 80% of our time waiting for our eyes to move” concept is pure bullshit, and it completely ignores the way the brain plans the next saccade while the fovea is focused on a prior word, something Spritz stops your brain from doing.

They make a slew of other claims I’m willing to call bullshit on, too.

  • This is not “new.” This technology has been around since the 1970s. The biggest innovation Spritz has to offer is hype.
  • Humans can talk faster than we normally do, too, but we speak and, not coincidentally, read at a rate of 200-300 words per minute, which is the maximum rate at which we can communicate in any fashion without diminishing comprehension.
  • Speaking of comprehension (and speaking of complete bullshit), they claim “studies have shown” there is no decrease (and may even be an increase) in comprehension with Spritz. They don’t cite a single study, though, and thousands of studies have been performed on comprehension. Every reputable study concludes anything faster than 300 WPM comes at a comprehension cost.

Spritz is hyping itself by making claims about how humans read that blend a little bit of science with a whole pile of bullshit to offer a product that has basically been available since eight-track tapes were a thing. They also have to ignore most of the science behind reading — particularly the fact that our brain is processing what the next word is subconsciously while we are reading the prior word — in order to justify their product’s existence. You simply cannot read faster than our writing system is normally read without suffering a decrease in comprehension. You can’t even listen to someone talk faster than that without having the same problem.

Our language forms (written and verbal, including every language in use on the planet) are limited by, and have basically developed to work optimally with, our brain’s ability to process and comprehend. Can you throw more words at someone? You bet. Our ears and eyes can take in hundreds of times more words than our brains can process. If I don’t give a shit about comprehension, I can glance at a whole page of a book at once, while ten people are talking and the TV is on. If, on the other hand, I plan on comprehending the words I am presented with, then 300 is about my max — and yours, too — for any given minute of either of our lives.

Is Spritz worth a shitz?

Not really. There’s a reason this technology went the way of bell-bottomed pants, the eight-track tape, and the pet rock. For one thing, not all saccades are forward. While the movie in our head is playing at 300 WPM, our eyes will sometimes double-check something we already read. And since we read, talk, and think in what is essentially real time, reading a novel three times that fast would be, quite literally, like watching a movie played in fast forward.

If your goal was to just get through Moby Dick so you could say you read it, but had no intention of enjoying it, I guess it could work. I can’t imagine being thrilled you just crushed every poem Maya Angelou ever wrote during one lunch break, but that’s effectively what Spritz has to offer. You’d have a level of comprehension somewhere between someone who really read the book and someone who just lied and said “Yeah, I’ve read Moby Dick,” but you could still claim you read it.

If you were reading something you actually need to comprehend, like I do at work all freaking day, I would strongly caution against it. There is just no science that supports the idea that a normal human can comprehend faster than we normally speak and read.

If you were reading something for pleasure, well, you should do whatever the hell you want, because what you do for pleasure is up to you. And it could be kinda funny to read 50 Shades that way — just to see all that undressing and screwing happen at three times normal human speed. The lack of retention and comprehension would be an added bonus there, too.

Ultimately, if your goal is to have more than a vague recollection that you read something and the right to say (at least partially) truthfully “I’ve read that book,” Spritz doesn’t really bring anything worthwhile to the party. For better or worse, we talk and read and hear and comprehend at a rate of about 300 WPM, max. Anything above that either comes at a cost in comprehension or is, literally, too good to be true. If that’s even good. Since I like the little mind movies that happen when I read, and don’t mind the fact that they aren’t all in fast-forward, I’ll just call it bullshit.

The Post in Which I Answer the Question: “What’s With all the F-Bombs?”

I’ll start with the cliché about the leopard not being able to change its spots. That doesn’t have anything to do with my frequent use of profanity. It explains why, when I sat down to say “here’s why I like to occasionally say ‘fuck,’” I lost an hour of my day reading fascinating articles written by linguistic anthropologists about that and similar words.

None of which have a fucking thing to do with the topic at hand.



The F-Bomb and Me, a personal history

I have two uncles on my dad’s side of the family. One was a contractor, the other was the bartender at the Irish Center in San Francisco. Both were Irish immigrants and, as far as I know, neither ever uttered a sentence that didn’t contain at least one F-bomb. That doesn’t explain anything about my use of such language, it just shows how I was introduced to it – probably in conjunction with my initial language acquisition skills as a toddler. “Fuck,” “fucking,” “motherfucker,” and “cocksucker” were what my uncles said instead of “um.” If they otherwise would have said “um” a lot.

[They also got me drunk the first (several) times and I tend to slip into an Irish brogue if I’ve had too many, though that has nothing to do with the topic at hand.]

My parents, on the other hand, do not cuss. I don’t remember having a conversation with my parents about my uncles’ version of “um,” although I’m certain I did. I would remember being sent home from kindergarten for asking some cocksucker to pass me a motherfucking crayon, and that did not happen.

Some years later, when I was around ten, my friends and I discovered those words anew, peppering our sentences with them as liberally as my uncles ever had. Of course, that was only when we were alone, unobserved, and certainly far, far away from our parents’ ears. I’m sure we did it to impress each other and younger kids, to feel “grownup,” and for a host of other reasons that tend to evaporate after dropping ten or twenty thousand F-bombs.

By the time I was in high school, in the right company and circumstances, I wouldn’t hesitate to use profanities for emphasis. For the next ten years or so, those lines were primarily generational. I seldom swore in front of someone my parents’ age, but had no problem doing it with someone my age or younger. Circumstances matter, too. I wouldn’t drop an F-bomb in front of anyone if I was, say, in a church, but I’d probably be willing to say “shit” on a racquetball court or by a campfire even if my companion were Mother Teresa.

This all seemed natural, and I never gave it any thought. Then I had kids.

Suddenly, I felt an overwhelming need to censor my language in front of not only members of my parents’ generation, but also my children’s. Which is ironic as hell, because I will never have as many conversations with anyone about the subjects of shit and piss as I’d had with each of my children by the time they were three. Granted, the vernacular was different (“potty,” “tinkle,” “poopie,” etc.), but shit is shit, whatever you call it, and we were literally talking shit to each other several times a day for years.

Cussing at the Office

Around the same time I was constantly talking shit, er, poopies, with my kids, I was also earning my chops in my professional life, where I was introduced to cussing at a different level. First, becoming a “grownup” meant that people ten, twenty, or forty years older than me were now my peers. I was practicing law, which meant I had to at least pretend I was the peer of every opposing lawyer I dealt with, even if he (and the ones that old were all “he”) was forty years my senior. Being the frustrated linguist I really am, that’s also when I started paying close attention to how people were using swear words. I noticed that people who cussed in this context fell into three groups:

  • Buster Blowhard. He’s one tough motherfucker. You know this, because he is constantly saying what a tough motherfucker he is. He might as well have “Super Insecure and Overcompensating” tattooed on his forehead. I say “he” because, while I am absolutely certain there are female versions of this, I have not done business with one yet.
  • The Casual Cusser. Talks to everyone (or at least most people) like they’re all in a high school gym together. Takes no offense at profanity and assumes you don’t give a shit, either. Doesn’t really put any thought into it.
  • The Strategic Swearer. Appears not to use any profane or inappropriate language whatsoever. When it’s time to call bullshit on something, the word “bullshit” silences a room.

I’m sure it comes as no surprise to anyone who reads this blog that, among my friends, I am a Casual Cusser. Professionally, though, I am squarely in the Strategic Swearer group. So much so that most people who only know me professionally may be inclined to think I don’t swear at all.

While I’m a Casual Cusser much of the time, I have to admit, the Strategic Swearer is BY FAR more fun. Swearing is all about how much power we give words, and being the Strategic Swearer lets me manipulate them like a power-mad comic book villain.

My favorite example is a deal I’d worked on for six months, never venturing south of the word “darned.” A new lawyer came onboard with the other side and started trying to jerk things around. After three days of this, I stood up and told him he was “pissing all over everything we had worked on for six months.” Then I told his client to contact my client directly if he was more interested in doing the deal than playing “bullshit games.”

Before the meeting, I told my client “start getting ready to walk out if I say the word ‘piss.’ If I say the word ‘shit,’ stand up immediately. Don’t talk to anyone.” He did, we left, and before the elevator arrived to take us downstairs, the deal was back on track. If I’d been saying “shit” this and “fuck” that for the prior six months, those words would have had almost no power. Coming as they did, though, they were powerful enough to make the person representing the other company go – quite literally – pale.

As I watched the blood drain from his face, all I could think was, If I said he was tinkling on the deal and they were playing games with cow poopies, IT WOULD HAVE MEANT THE SAME FUCKING THING.

Where That Power Comes From

It would have meant the same thing — and it wouldn’t have at the same time. That’s the amazing thing about swear words.  Their context is their meaning. The meaning of any given swear word happens somewhere between: (1) the speaker’s use of the word and (2) the listener’s feelings (a) about the word generally and (b) how the word is being used at that moment. As writers, we can look at it as the ultimate exercise in usage and cognitive construction, because the true meaning to the listener does not have one fucking thing to do with the literal word we are using.

You can see the same thing on the opposite end of the spectrum, too. We have a huge Mormon population where I live. They never (ever, which is to say, at least not when another Mormon is around) say the word “fuck.” Which makes sense, because Mormons are notoriously proper, well-mannered people (particularly so if another Mormon is around). Go watch a Mormon basketball game – don’t ask me, basketball seems to be a significant aspect of their religion. You’ll hear the words “screw” and “screwed” thrown around with abandon. And they’re being used exactly when and how the F-bomb would be dropped by someone comfortable with dropping F-bombs.

They say a word that means the same thing. They say it in the same context. They say it with the same intent. The only fucking difference is the significance they have subjectively given that word as far as its “badness.” Fuck is bad because – and only because – they have decided it’s bad. Screw, which fucking means “Fuck,” for fuck’s sake, is fine, because — well, it’s not “Fuck.”

And I don’t mean to pick on Mormons, here. They’re just a convenient example. The same is true for all of us. Or, should I say, Every Fucking One Of Us. There’s nothing wrong with it. We have the friends we tell “I’ve gotta take a piss” and the friends we tell we “need to go to the bathroom.” There are people we ask for the “restroom,” and we may tell a three year old we “need to go potty.” Almost all of which we do without thinking twice – it’s a natural part of our language.

So, why do I cuss on this blog?

Because you’re the friends I tell “I’ve gotta take a piss.” 🙂

Properly used (if that isn’t an oxymoron in this context), I think swear words are a more effective way of placing emphasis than the main alternative, an exclamation point. For me, they are also the more honest – this blog is about the most unfiltered (and unrefined) version of my “voice” imaginable. This is what I sound like in my internal monologue and when I am speaking to my closest friends. In other contexts, there is some form of filter – usually so ingrained it’s subconscious – making decisions about the propriety or utility of those words.

Which is one of the reasons I think I love blogging so much. In here, I don’t have to give a fuck.

This is Your Brain on Words Part Three: What the reader really sees on the page

“To see is to devour.”

― Victor Hugo, Les Misérables

Our Eyes Weren’t Designed for Reading

Evolutionarily speaking, eyes are a pretty big deal. Oxford zoologist Andrew Parker has even posited, in his Light Switch Theory, that the development of eyes set off an evolutionary arms race (what we now call the Cambrian Explosion). It makes sense – a predator who can see is going to make pretty short work of most of its blind prey. It’s going to do a hell of a lot better hunting (i.e., outcompete) other predators who are blind, too.

Making a long story (by which I mean hundreds of millions of years) short (this paragraph), light-sensitive spots – which even plants have – grew and changed until animals that couldn’t see only survived if they lived where there wasn’t anything to see – underground, the bottom of the ocean, the Lifetime Movie Network, etc. Of those that can see, different animals have developed different eyes – the kind that work best for those particular animals. Dragonflies can see almost everything around them; they can see in both dark and light, and they can see color. They can’t focus well enough to do something that requires seeing more than the outlines of shapes, though, and details within those shapes are indistinct. Which means no dragonfly could ever do something like read this blog. Boo dragonflies. Front-focusing, color-receiving, not-particularly-good-at-seeing-in-the-dark eyes – like we have – predate humans being human. We share them with many primates and, interestingly, dolphins. So, yes, a dolphin could read my blog. Yay dolphins. Also, we can move our eyes without moving our heads. That doesn’t sound like that big a deal until you realize it’s more (in terms of sheer numbers, much, much, much more) the exception than the rule.

Spoiler alert, this is where this shit starts getting really cool

We didn’t develop our eyes for reading. We developed them, like everything else, to get food. Well, to get food and to get laid, but getting food was the focus, being both a prerequisite for and the ticket to the latter. So we’ve got these eyes that do a pretty good job of focusing on a specific animal we want to kill, and they also give us enough vision around that animal to know whether something else is hunting it (or us). Then every single ancestor you’ve had since your relatives were primates got laid and, hundreds of millions of years later, it’s time to read.

See, I told you the long story would be short.

How We Read With Hunters’ Eyes

You may think you are looking at a page on your monitor right now, but as soon as I mention it, I’ll bet you’ll agree that you are really just looking at one spot – like the word “spot” – and taking in the rest of this page as a progressively blurrier bunch of stuff. That small spot of clarity (and the relative clarity of what’s around it) is key to understanding how our eyes take in words.

The “sweet spot” in our retina, which is to say the spot that focuses most clearly, is called the fovea. It’s a depression in the retina, and it is what you point at the word you are reading at any given moment.


Reading takes place through a series of snapshots (fixations), several per second, linked by the quick jumps called saccades. During each snapshot, our fovea is focused on around four letters. We see ten to twelve letters total – four to the left and seven or eight to the right of where we are focused. Assuming, that is, we are talking about Westerners reading the left-to-right Western alphabet. If you were reading Hebrew or Arabic, for example, you would still see seven or eight upcoming letters, but they’d be on the left (because those words read right to left). If you were reading Chinese, you would not focus on as many upcoming characters, because character density is completely different. You would, however, be foveally focused on the particular character you were reading, and looking (a bit, and not consciously) at what comes next.

In other words, reading looks like this:


Those saccades are important, because they are not a smooth roller gracefully moving from one word to the next, putting everything in the foveal sweet spot when its turn comes. Saccades happen through a jerky, fast process. During that process, our brain looks for every opportunity to take shortcuts and cheat. Once we’ve learned how to read and don’t have to sound out every word, we also stop “reading” every word. Because the brain is fast, and wants to get down to the story, it will grab the first few letters of an upcoming word while still focused on the previous word. If there’s a pretty good chance the next word is “the” or “its” or something it can assume it knows, the next saccade will shoot past that one, grabbing the last letter or two subconsciously as the fovea is brought to bear on the word after, using that trailing focus to confirm what the word was.
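That lopsided window (roughly four letters left of the focal point, seven or eight to the right) is easy to model. Here’s a toy sketch of the idea in Python, with the window sizes taken from the figures above; it is an illustration of the concept, not a claim about how your visual cortex works:

```python
def perceptual_span(text, fixation, left=4, right=8):
    """Crude model of one fixation: letters near the focal point are
    legible; everything outside the window is a blur (dots here)."""
    start = max(0, fixation - left)
    end = min(len(text), fixation + right)
    blur = lambda s: "".join("." if c != " " else " " for c in s)
    return blur(text[:start]) + text[start:end] + blur(text[end:])

# Fixating on the "k" in "looking":
print(perceptual_span("you are really just looking at one spot", 23))
```

Slide the fixation index along the string and you get a rough picture of what each saccade actually delivers to the brain.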

This is Where Our “Written Sounds” Really Shine

As we discussed in Part Two of this series, the base unit of human language is the syllable. Letters are smaller, but we use letters solely for the purpose of forming syllables, which are something our brain can turn into word-sounds and process as language.

Any guesses how many alphabetic characters are in a normal syllable? If you guessed “the same number we can shoot our foveal eye-lasers at” you are absolutely correct.


We are still (and always) glancing from sound to sound (syllable to syllable). When we see something coming into that foveal focus and think we know what it sounds like, we will skip it (like you probably just did with “it”) and move on to the next one, grabbing a little subconscious confirmation on the tail end. Among other things, this means that, long after we stopped “sounding out” words like we did when we were learning how to read, we’re still sounding out words. Every freaking time we see them.

That fact contradicts most of what neuroscientists thought about the issue for most of the past century. But modern studies have basically put the issue to bed – and did it in a way that surprised most researchers who formed hypotheses before we had the tools to really study this stuff scientifically.

Why Jane Doe, Fiction Writer, Should Give a Shit About Any of This

There are two key elements to how our (which is to say our readers’) eyes focus that can be enormously important to keep in mind when we’re writing. The first is that, to a fluent, adult reader, this process is subconscious. It is also extremely fast and efficient. So fast and efficient that most neuroscientists did not believe it (“sounding out” words) occurred in fluent readers for most of the time they’ve been studying it. The change came with advances in brain-scanning technology, which confirms what used to be the minority view – we all still “say” every word as we read.

More importantly, when confronted with new (which is to say, unfamiliar-sounding) words, we stop using the fast, efficient, and subconscious approach. We revert to our basic, first-grade version of “sounding out” words. Which sucks. This includes:

  • Circuitous lexicon proffered to elucidate our erudite palaver (i.e., snooty douchebag words that make us look smart).
  • Characters who speak with accents, requirin’ y’all to bees phone-etically spellin’ ert hissuns dialogue.
  • People, places and things in fantasy or sci fi (or anywhere else, for that matter). Thog’s Slayer is readable at a glance. The Glerphitities Schelphngbot of Xyphitites is just a pain in the ass.
  • Anything else that takes the reader away from the magic formula: Five letters or less make a sound, I know the sound those letters make, I can move on to the next sound without thinking about the letters.

Another key thought here is a play on (and important distinction from) the writing adage “never use two words when one will do.” I love that concept – keep your writing as clear, simple, and direct as possible – and it’s been stated in different forms by everyone from Thomas Jefferson to Ernest Hemingway to Stephen King. But the truth may be more nuanced. It’s entirely possible that two one-syllable words achieve the goal of clear writing for the reader in a way that one three- or four-syllable word never will.

Understanding that we read in syllables – sounds – can be a game changer. As writers, we tend to think (and talk) in terms of words – word count, words per sentence, how many words did you write today, per-word rates for freelance work, etc. The Buddha talked about words, Shakespeare talked about words; there is no doubt that words are a big deal. But it is important to realize that for more than 99% of our existence as a species, words were sounds and only sounds. As writers, we have a tendency to forget that fact, thinking of the word as a thing in itself, not as a sound or couple of sounds that represent an idea. At their core, that is all words have ever been.

Brief Recap of the Your Brain on Words Series to Date

We’ve covered a few key concepts that converge at this point, so this is probably a good time for a brief summary:

1)    Humans communicated through sound almost exclusively for (depending on who you believe) about 2,000,000 years or 200,000 years before we ever tried writing.

2)    Our first writing consisted of ideograms, which were not based on sound. However, ideograms only lasted about 500 years – beginning to end – before phonetic alphabets replaced them. During those 500 years, a tiny fraction of the human population was capable of reading or writing, basically just a few kings and priests.

3)    From alphabetic use of hieroglyphs 2,700 years ago to today, human writing systems are all based on symbols that represent syllables (e.g., Chinese script) or letters (like these) that combine to form syllables.

4)    Through millennia of trial and error, we ended up with various writing systems that all share one thing in common. They each allow the human eye (which evolved to hunt and get laid, not to read) to most efficiently convert the system’s symbols into syllables (sounds) that the brain can process as sounds. You’re doing that right now.

5)    As a result, words that throw common sound combinations at us flow by smoothly for the reader. Even a made-up name, like “Scrooge,” is processed in a nanosecond, because it is easily turned into a sound. Real words, like “otolaryngology,” stop that subconscious process cold. So do any other words that challenge our “letter-to-sound” process, including jargon, accents, and dialect.

6)    The big takeaway is that we need to seriously think about focusing not on how many words we use, but on how smoothly our sounds flow for the reader. Striving to state things in the “best and simplest way,” as Hemingway put it, is not limited to “the fewest possible words.” Two one-syllable words are almost certainly better and simpler for your reader than one four-syllable word.

Coming up next…

Next up in this series will be getting from sounds to meanings – how our brains turn syllable sounds into tangible ideas.

Next up on this blog (because these posts take a ton of research and I have a day job and shit) will be: (1) a takeaway from this post about why that Spritz thing that is supposed to let you read a novel in 90 minutes should be called Shitz, because it’s a crappy idea that ignores what’s really happening when we read; and (2) a long-overdue explanation of why I just fucking love to cuss.

Not, necessarily, in that order.

This is Your Brain on Words Part Two: Evolution (we’re basically a bunch of primates with books).

In this installment of the Brain on Words series, I am taking a look at the history of the human race as it relates to words on paper. Well, most of the time they were on clay tablets, but you know what I mean.

In the beginning, there was the word…

In the great evolutionary scheme of things, language is a new thing. When and where spoken language first happened has been called “the hardest question in science.” Since half the arguments I have with my daughters involve some variant of “I never said that” about something that was or wasn’t said last week, yesterday, or five minutes ago, it’s pretty easy to see why. Even if I had a time machine and a translator and could go back to the day after spoken language really happened for the first time, I’d probably find some protohuman couple standing in front of their cave and hear one of them saying, “I never said that.”

Fortunately, that part doesn’t matter a hell of a lot to us. We don’t need to get caught up in the debate about whether spoken language evolved 1.7 million years ago, as some scholars think, or 200,000 years ago, as others argue. A few even put it at about 40,000 years ago, though discoveries since the 1990s tend to discredit that view. In this analysis, though, we can just agree that it occurred “RFLTA” (a Really Fucking Long Time Ago). What matters is that homo sapiens were communicating through sound RFLTA, which was also a RFLT before they ever tried communicating through something other than sound.

It took a long freaking time for anyone to write that word down…

Writing – using agreed-upon symbols to mean something – is so new that, in an evolutionary sense, the paint is still wet. Between grunting "I never said that" and anything we can really call writing, pictograms started showing up on the cave walls. They communicated ideas, but not through an agreed system of "this means that." Instead, they just depicted the idea by showing exactly what the idea was. There was no reason to standardize them, and you didn't need to be "literate" in any language to read them. They were, literally, just pictures:


Cave of Altamira, depicting what appears to be a Red Bull Energy Drink product placement, dated to around 15,000 B.C.E.

In other words, pictures, not writing. Over the course of the next 12,000 years, thanks largely to prehistoric humans’ lack of cable TV and internet access, they had plenty of time to think about standardizing those pictures a little bit. If you want to spend all week painting a beautiful picture, that’s one thing. If all you want to do is say there was an animal, there’s really no reason to go all Michelangelo on it.

So, eventually, a rudimentary system for writing developed. We went from pictograms (I drew you a picture) to ideograms (we’ve agreed this picture represents that thing). Sumerian cuneiform, showing up around 3,200 B.C.E., is thought to be the first, with Egyptian Hieroglyphics arriving around the same time. Cuneiform looked like this:


These appear to be want ads from a Sumerian newspaper, and someone is giving away free kittens.

This still isn’t writing as we know it, but it was a huge step in the right direction. This is certainly not an alphabet. It is a series of pictures that represent nouns and verbs. It was also a huge pain in the ass, with around 2,000 different symbols (although that number dwindled over time).

Ideograms were of limited use themselves, but in them were the seeds for something special. The number of symbols kept dwindling, meaning they had to cover more things. This process seems to have fed on itself until the Egyptians were down to just twenty-two symbols.

Here’s the cool part…

Those symbols no longer represented specific things. They represented sounds. By 2700 B.C.E., Egyptian hieroglyphs each represented a specific syllable that began with a single consonant of their language, plus a vowel (or no vowel) to be supplied by the native speaker. That development is huge. On a whole bunch of different levels.

Why hieroglyphs still matter

Think about this for a second – we (humans) had spoken language between two million years ago and two hundred thousand years ago, depending on whose estimate you're using. Not counting the time we also drew pictures on cave walls (since that's not really "writing"), we had symbols that represented "things" for about five hundred years, total. Then we switched to syllables, the most basic component of human speech. When I say "speech," I mean sounds we make.

We write sounds. We read sounds. Not counting the 500 years it took us to get from standardized pictures to pictures of sounds, humans have never communicated in any way other than sounds.

A little math shows how important this fact is. For something between 99% of our existence as a species (with the shortest estimate of when speech developed) and 99.99975% of our existence (with the longest estimate), we have communicated with each other only through sound. Either directly, or, for a small slice of the most recent little bit, through symbols that represent sounds.

It’s no accident our first written language was broken down by syllables. Syllables are single sounds, and sound is how we had been communicating at least since the development of anything we can call language. When we first developed written words to communicate ideas, they were single sounds.

And guess what?

That’s what we still do. And this is the payoff for the prehistoric history lesson. The only thing we’ve done with language since the Egyptians started associating syllables with pictures is tweak that system. The Greeks developed the first “true” alphabet, with consonants and vowels, and every writing system since is either a collection of symbols that represent syllables (e.g., the Chinese “alphabet”) or symbols that combine to form syllables (like I’m doing right now as I type this and you’re doing right now as you read it).


It hasn’t even changed all that much


We didn’t evolve to read

…writing evolved (or was developed) to work with the already existing part of our brain that hears sounds. We haven't had time to evolve, anyway. In the first place, 5,000 years isn't enough time to evolve much – unless you're a virus or other simple organism. Also, hardly any people have been reading for much of the 5,000 years it's been an option. Literacy has ebbed and flowed over the millennia, with a few high points if you happened to live in Rome or ancient Greece at the right time, but for the most part, we've been an illiterate bunch for all but about the last 300 years. Chaucer and Dante were writing for the ten percent of the population who could read, but the other ninety percent couldn't tell Dante from Danielle Steel.

The Bottom Line

Humans have always communicated through sound. Ironically enough, that’s precisely what a writer is doing, too. Our brains have not had time to develop “reading” abilities. Instead, we have created a system that uses symbols to represent (or combine to represent) sounds – i.e., syllables. The part of our brain we use to process written words is the same part we use to listen to someone talk. As far as our brains are concerned, they’re doing the same freaking job.

This is only the first step on our journey through the whole process this series will cover, but it is a crucial one. The first thing a reader does when she looks at letters on the page is (almost always nonconsciously) translate those letters into sounds. Not words, not images or ideas – syllables. Those syllable/sounds are then “heard” by the brain and combined to form words. That process (and the things that can interfere with it going smoothly) will be a big focus of this series. All of it is predicated on the fact that all human language – spoken, sung, written, or however it comes – is the same thing as far as our brains are concerned.

Coming up next…

The weird way we created a system of writing that works ideally with eyes that spent a few million years only worried about hunting and gathering.

This is Your Brain on Words Part One: Series Prologue – er, Foreword. Whatever, it's like a summary but you can skip it if you want.

This post is the first part in a series that will attempt to answer the question: What happens when someone reads a story? The question is simple. The answer, to the extent there is consensus in the scientific community with respect to certain aspects of the answer, is complicated as hell. It’s also fascinating.


This is your brain on words

By "What happens?" I mean, literally, what physiologically happens – from the retinal/foveal response in the eyes through the neurotransmitters all the way to creation of a little mind movie in the reader's head. By "head" I mean the squishy, wet, amazingly complex organ that evolved for millions of years without seeing a single written word. Spoiler alert: that part about evolution is really important.

Who gives a shit?

Anyone who is interested in building a mind movie in readers’ heads, I hope. Not that I think Dickens or Nabokov gave a shit about neurobiology while writing. They were damned good at knowing its outcomes, though, and those outcomes have a lot to do with why they wrote so well. Their works, as well as every book read before or after, were all consumed in precisely the same way. It starts with a pair of retinas (actually, the fovea within those retinas). If things go well, it ends with the reader’s imagination showing him or her images that elicit real emotions. Being overly analytical, I can’t help but wonder how that magic happens. Also, I don’t believe in magic.

I do, however, believe in making things magical – or not suck, anyway. We can glean a shitload of information from the science that has been (or is being) done on this stuff. Information that can, and probably should, directly color decisions we make about word choices, use of dialect, sentence length, paragraph length, and a ton of other things we, as writers, constantly find ourselves pausing to ask questions about.

What this information won’t do

Tell you how to write a story, for starters. I’m amazed I can’t find the information I’m digging up for this series synthesized for writers anywhere else, because it provides a hell of a toolkit and answers a lot of questions writers frequently ask.

That said, owning a toolkit does not make one a carpenter. By the end of this series, you will understand why the name Ebenezer Scrooge works. Which is to say, what neurological response allowed you to read that name the first time you saw it without being pulled out of the story. Also how that name was crafted to read like words you had seen before, although you hadn't seen that particular word, and how it allowed your brain to make immediate associations with the character and his personality based on the associative properties of the syllables in the name. I am not, however, saying that knowing that means you'll be able to write like Dickens.

Tehere are C3RT4IN tihngs a6out wirtnig taht our brainz can D3C0DE even if tehy are worng.

And other things that are difficult for our brains to process regardless of how “correct” they are. Knowing how to lean on the strengths and avoid the weaknesses of a reader’s ability to process what we put to paper is all about making our words do their job the best way they can. Something that is ultimately decided inside someone else’s brain.

The point behind this series is to learn every hack, cheat, and trick at our disposal to make the mind movie run as cleanly as possible in the reader's head. If the mind movie you have to offer is Ishtar or Son of the Mask, that may not be much of an improvement. But at least you'll know what the theater looks like on the inside.

Overview of the series – what to expect

We'll start with history and evolutionary biology. A lot of the murkiness about reading and the brain stems from how unbefuckinglievably new reading is. (Get it? "brain stems" Bwwaahahahaha) It is so new, in fact, we haven't had any time to evolve to perform the task. That's not a problem, though, because we have forced the system of writing and language to evolve to work with existing features of our mostly primate brains.

Then come the eyes. The number of words we really focus on at one time (actually the number of letters, and it's four), the number and location of the letters we nonconsciously process while we're focused on those four letters, and how our brains decide where to focus next based on that information.

The brain decodes the words. Some are easier than others. In fact, some are so easy, our brains skip them altogether, assuming their presence and intent based mostly on shape. Other words shut off our reading (in the adult, fluent reader sense) and make us revert to tools we used when we were first learning to read – a process that readers, understandably, hate. That was the point behind saying "unbefuckinglievably," above. The process for determining the meaning of that made-up word is entirely different from the process of deciphering (or intuitively knowing and moving on from) every other word in the sentence.

The words have meanings. Even words we’ve never seen or heard before can have direct, concrete meaning based on our intuitive use of language. The entire point behind writing is to create meaning in the reader’s mind. Much of how that occurs (and what can interfere with it) is firmly rooted in neurobiology. Most of that neurobiology was developed for spoken language, and we created written word systems to encode spoken word systems. Those spoken-word brain centers are still what process the written words. And, yes, that was the point behind the “brain stems” joke.

Those meanings create images. This field is new and exciting. The translation of words on paper into pictures – the mind movie. Some things facilitate that, others interfere, and knowing which do what is powerful writing mojo.

Images create emotions. The holy grail of writing – causing a reader to experience genuine emotion. Or, stated in my geeky way, causing the reader to have a physiological response to the words on the page. Something that best happens when the reader has forgotten she is looking at words on a page.

More than anything, the point behind this series is a highly specialized and technical version of putting ourselves in the reader’s shoes. Understanding what the reader actually experiences sheds a bright light on those things that facilitate or interfere with the reader’s experience.

So ends my foreword/prologue/overview. Up next: This is Your Brain on Words Part Two: Evolution (we're basically a bunch of primates with books).

Why I’m done with AbsoluteWrite (Or: the post that lets two dozen people say “LOL, told you so”)

It’s not like I hadn’t been warned about AbsoluteWrite before I signed up. I thought (and still think) that there are some great people there. Some of whom I’m so fond of that I do not take the decision to pull the plug on that board lightly.

But there are also some really weird things about that board. And when I started scratching beneath the surface, it got even weirder. Plus, when someone says “watch what you’re doing, or else” and the “or else” part is “you can’t keep providing free content to my website,” well, yea, fuck you. I’ll “or else” myself right out of there for you.

I joined my first internet bulletin board in the 1990s. Though they’ve almost invariably been about fly fishing, I’ve been a member and/or mod on one or more bulletin boards continuously since. As we say where I live, this is not my first trip to the rodeo. Warnings about bullies don’t do much to deter me. Bullies are trolls with homes, and the day I can’t hold my own in a written exchange with a bullytroll is the day I’ll hand in my interwebz learning permit.

Like any BB, AW had a lot of what I would consider bad advice. Plus, being as big as AW is, things were less cordial than I'm used to. There's certainly some truth to the clique complaints I'd heard about, and the groupthink observations were not entirely incorrect (though they aren't entirely true, either). Those things said, there are also some pretty straight-up people on that board with good insights, and I think the onus is on anyone participating on a BB to learn to separate the wheat from the chaff in terms of advice. So I posted for several months, undaunted by the warnings I'd received. During that time, I also became quite fond of several AW members.

One thing that caused me a certain level of concern – well, annoyance is a better word – was the tendency of a couple of the moderators to both participate in the snark/drama/bullshit on threads and then delete posts and/or lock threads where they'd been participating. That does not happen on a well-administered, well-moderated board. In fact, the main reason I hate being a mod is that being a smartass is integral to how I function, and it takes an enormous amount of energy to turn that off if I take the job. But I also realize that you can't do both, and not being a smartass or wading very deep into disagreements is part of the job description. Such is not the case at AW.

The funny thing is, while I've since discovered people with legitimate horror stories about the same AW admin and mods, nothing particularly bad happened to me there. Certainly nothing bad enough for me to buy a domain and start a webpage decrying the place (yes, that's happened). Or anything that would warrant ranting about AW on a site about internet bullying of authors. For me, it's much more like I went to a restaurant – my server was rude and the food was meh, so I decided I won't be going back.

One incident that involved me directly was a thread about a legal question. A poster asked about public domain issues if a poem had been circulating as anonymously written for more than fifty years. The family of a dead firefighter claimed he wrote it, but nobody had ever asserted a right of ownership. Prior to any of the specifics, but having been told that the poem had been around as anonymously authored for more than fifty years, I said it would be public domain.

A rather snarky mod then told everyone that I was wrong about how the public domain works, that you can never lose your copyright to the public domain by having something published, and that that is not how copyright is determined.

Then she locked the thread.

That was annoying, since there is some truth to what she was saying under the current state of copyright law, although it is far from as cut-and-dried as she believes. In fact, the US Supreme Court heard oral argument earlier this year on a case addressing this question, because the circuit courts are split on whether copyright can disappear under the doctrine of laches if you fail to publicly enforce your rights. But, since the poem in question had been published as anonymously authored more than fifty years prior to the US joining the Berne Convention in 1989, she was dead wrong about the applicable law with respect to the poem in question. A fact I pointed out in a personal message to said mod, who told me that the site administrator, Macallister, was in agreement that the thread should be locked.

My response at the time was that it sucks a mod would rather keep the wrong answer out there and have the last word than get the right answer out, but – whatever. It was advice that incorrectly told the poster how to stay out of trouble, nothing that would get that person into trouble, so no big deal.

By this time I had noticed some (by no means all) of the AW mods had a propensity to lock threads after issuing decrees about how right they were, often in downright rude ways. It began to dawn on me that the “bullying” I’d been warned so much about wasn’t about people who posted on the boards, it was a warning about the mods running the place. Which is a shame, since it’s otherwise a fairly rich and interesting community. I also avoided the threads where these issues are much more frequent. My participation was limited to writing and grammar questions (well, and a cooking thread).

Last Friday it happened again, on a thread I hadn’t participated in at all. Ironically, the person singled out to be slammed before the thread was locked was the person I probably disagree with more frequently than anyone else on the interwebz. I also respect him, and there is no personal animosity between us – in fact, I’m fond of the guy – but we have ardently different ideas about writing.

Being snotty and locking threads doesn’t do much for anyone, and if I were an admin on that board, I’d probably want to know this was an issue. So I sent a politely worded PM to the site admin, Macallister, telling him/her (I had no idea what the person’s first name was) that, in my experience, mods generally do best if they stay out of the snark. If they aren’t going to do that, it’s usually a bad idea to have them participating, getting in the last word and then locking threads.

The next thing I hear is a super snarky response to a post from said mod, with a post of my own also deleted. I promptly (read: naively) forward it on to the admin, saying, "This would be the kind of thing I'm talking about."

In response, I get the following (Cryptic? Nonsensical? I think it was supposed to be threatening, but it doesn’t make enough sense to feel very ominous) message from Macallister:

You seem to be making some assumptions about what happen when you’ve been corrected by a mod that aren’t going to bear out well for you, Mooky.

To which I responded, “What the fuck?” Well, I literally responded:

I honestly don’t know what you are saying. I assume it’s a threat?

I (still, literally) do not understand what that person was saying. Which is pretty rough, if you are supposed to be the person in charge of running a board for freaking writers.

At this point, I’m wondering who Ms./Mr. Macallister is, so I look at the profile and discover that Macallister is supposedly his/her first name. The full name is Macallister Stone.


Needless to say, it took the Google machine about three milliseconds to confirm that MacAllister Stone is not a real person. I also learned that when AW was legally registered to do business (which it no longer appears to be), its registered address was a storage locker (which is a big no-no). Somewhat disturbingly, the fictitious MacAllister Stone was the registered business owner (which is a huge fucking no-no), and MacAllister Stone had also been doing business with people under that name. People who were unhappy, because MacAllister Stone owed them money – which is why it's such a big no-no to have your imaginary friend own a business.

At this point, I'm getting a pretty iffy feeling about AW. Which sucks, because the wealth of information provided by its members on the "Bewares & Recommendations" portion of the site is extremely valuable. It also sucks, because some of those people are fun, knowledgeable, engaging people who follow this blog and/or whose blogs I follow, and whose company I enjoy. Plus it was a great way to kill time when I was on conference calls, which are how I spend an inordinate amount of time.

I don’t want anyone to think I’m storming out of AW, morally outraged. I’m not. I’m mildly annoyed. I just see no reason I should allow myself to continue being mildly annoyed, particularly when someone’s imaginary friend feels compelled to threaten me with…. Well, no longer mildly annoying me, I guess. 

Like all bulletin boards, AW functions on an implied agreement. The board provides a place for me to engage on a topic that interests me. In exchange, I provide page views and content, which happen to be the way boards make money. If Remington MacAllister’s human alter ego thinks she is in a position to threaten to take her half of that deal away, that’s fine. You win. I’ll stop providing free content.

Like I said, I went to a restaurant and the server was snotty. The food wasn’t terrible, but the owner was a pompous ass – and a touch sketchy. There are better places to find a sandwich.

The End.

Post Script (but it’s not an epilogue, so please READ the damned thing)

There are websites that loathe AW for all the wrong reasons. They are also full of information about AW and the whole sock puppet MacAllister Stone thing, much of which turns out to be correct. Some of those sites (not all) having correct information about this aspect of AW does not mean you can otherwise rely on information they provide. Some of their reasons for loathing AW may relate to how much good information AW members provide on the "Bewares and Recommendations" threads, and how well that section steers people away from being scammed. Other information from those sites should be treated with significant caution and vigilance. It's pretty easy to tell the sites I'm talking about from sites that may have concerns but not ulterior motives if you pay attention. Please, if you start wandering through these sites, pay attention.

Whistle While You Work (but probably not while you edit)

Music is about as dangerous for me as tequila. The last time I drank tequila I was nineteen years old and in Arizona. Well, that's not true. The last time I remember drinking tequila was in Arizona. Then I woke up wondering why there were so many Mexicans in my buddy's living room. There weren't. There were so many Mexicans around because I was in Nogales, Mexico. I (reportedly) was convinced that I loved tequila and needed to go to its house and meet its mother or something. My friends – wonderful people, but not exactly paragons of good decision making themselves back in the day – obliged. My love affair with tequila ended that morning.


Music can be almost as dangerous. I have injured myself while working out because music made me feel invincible. If I am going to listen to music while I run, I need to run on a treadmill, or I will run into traffic, or off a cliff, or somewhere else far worse than Nogales, Mexico. If I need to get pumped up for something, music will do it every time. If I need to calm down and relax, it can do that too. Music has an enormous impact on how I think and act. So much so that I can't write anything with music playing. If I were going to write with background music, I would need to precisely tailor a song list to match what I was writing, paragraph-by-paragraph, or what I wrote would reflect the emotional content of the music more than my story.

Enough about me.

What’s the Real Scoop on Music When Writing and Editing?

There isn’t one. That’s not the result I expected to find when I went digging through journal articles and studies. And there are an astounding number of studies looking at these questions from every point of view imaginable. So far, aside from some general truisms, they almost all cancel each other out. The most worthwhile thing I could take from several hours of reading abstracts and the entirety of a score of studies is the reason they cancel each other out. That part is fascinating.

One good meta-analysis from Chemnitz University of Technology, Germany, put it like this:

[G]lobal analysis shows a null effect, but a detailed examination of the studies that allow the calculation of effect sizes reveals that this null effect is most probably due to averaging out specific effects.

In other words, one study shows a certain type of music improves performance among a certain group of people doing a specific task, another study shows different music or different people have impaired performance doing the same task, so the net result is an average of zero. BUT, that only means the average is zero, not that either study is wrong. So certain music for certain people helps. One study proves that. Different people’s performance may be impaired by the same music. The takeaway is not that music does not have an impact – it is the opposite. Music is having an impact in both cases. The impact just varies widely based on the other variables.

What are the variables?

Who the fuck knows? I sure don't. There wouldn't be a bazillion studies on this if anyone was even close to figuring that out. Some have been identified. One of the most cited studies, The differential distraction of background music on the cognitive test performance of introverts and extraverts, finds significantly different responses to music based on the subjects' personality types. Extroverts perform better with certain types of music playing, while introverts performing the same task with the same background music perform worse. The type of task also seems to make a huge difference. There is pretty good evidence that music enhances the performance of even extremely complicated tasks (e.g., neurosurgery) once they are learned and are somewhat routine. However, background music generally interferes with the process of learning a much simpler task (e.g., memorizing a random list of letters).

You would never want to learn to perform neurosurgery with music playing, but once you know how to do it, you probably do it better with the right music playing in the background.

If that weren’t enough, the real world adds a whole different pile of variables to the lab studies. I mostly write at home at night, after putting the kids to bed. My house is nearly silent. Tonight, however, a half dozen fourteen-year-olds will be having an ironically named “slumber party” at my house. Dozens of office-noise studies tell me that, while silence or white noise would be optimal, since I’m not going to be getting any of that, music will be less disruptive and distracting than a bunch of disruptive intermittent noise. Tonight, music may increase my productivity. Tomorrow, it probably would not.

What’s the Bottom Line?

Music undoubtedly affects our perceptions and our functioning, but how much and in what way varies so much by individual, there aren't any real guiding principles. If you're an introvert, even music that matches the mood you're trying to capture in your writing is likely to interfere. If you're an extrovert, that same music can help significantly. In either case, you can set your mood pretty effectively by listening before you start – almost all the studies show the right music can have beneficial effects before starting tasks.

As a rule (which means I’m sure there are plenty of exceptions), cognitively demanding tasks are performed better without music. There is a time to be upbeat and optimistic. Looking for unwanted commas and adverbs that need killing is not that time. How much this will affect you, though, depends (again) on so many variables the true answer ranges from “Don’t do it, you’ll never edit well listening to music you like” to “You can probably edit about as well with or without, so if it makes the task more pleasant, go for it.” One study in particular found that cognitively demanding tasks – while performed less well with any kind of music – were actually performed better when subjects listened to a voice saying the number “three” over and over.

So look for my new CD to drop soon: Mooky & The Mookettes, Three: three, three, three (three three three three).

Ultimately, though, the state of science on this topic puts it into a familiar category of advice on this blog: figure out what works for you and do it. If your writing is carefully structured linguistic constructs like Joyce wrote, music will probably do more harm than good regardless of your personality type. If you’re an extrovert, you’ll almost certainly write better with music you like playing. Or at least feel like you are writing better and be less fatigued by it. If you are an introvert, silence is golden. If you’re as sensitive to music affecting your mood as I am, the perfect song may get you started, but without a good playlist to keep things in that mood, all your characters would seem like bipolar, menopausal, chemically imbalanced, pubescent teenagers trying to find tequila’s house.

It's not in Nogales.

The Innate Talent Question: Thus Spake Überdouche

Well, the Talent Wars have flared up again. The fight over how much “talent” is a factor in a person’s writing ability almost invariably gets ugly. And, if you scratch the surface, it’s pretty easy to see why.

I try not to be a complete asshole when interacting with people, whether on the internet or standing in line at the airport. It’s usually not all that hard. When this issue comes up, though, I have to consciously restrain myself. If we were in a bar, there would be a fight. This argument flat out pisses me off.

There’s a legitimate reason this pisses me off, and it goes beyond the standard, frustrating internet discourse loop:

Opinion –> Counter Opinion, with supporting evidence –> Opinion stated more strongly –> I’m not making this up, here’s a journal article with metadata –> Opinion stated angrily –> Look, there’s no reason to get angry –> Name calling.

On this issue, it’s all I can do to stop from being the angry, name-calling part of that equation. This is unlike most internet shitstorms — I couldn’t give a fuck whether you outline or not, as long as you don’t state inaccurate facts or tell other people they are doing it wrong. On this issue, I am personally invested in the very real impact of the discourse itself. Not because I lie awake at night questioning my own talents (I sleep just fine, not giving a damn whether I have any). But there are some kids (which I mean literally: high-school aged) who I coach and mentor and care about, who do worry about that kind of shit. Often, kids with emotional issues (way beyond the issues we all have at that age) and family situations that are dicey as shit. Not uncommonly, kids who have spent a lifetime being told they’re worthless pieces of shit.

Nobody else will ever be able to convince them they are wonderful, not worthless. They have to decide that for themselves. And I’ve seen it happen – almost miraculously, and well over half the time. A bit of encouragement leads to a shred of success that leads to increased interest that results in a bit more success, more work/practice, more success, until the kid finally looks around and thinks “Holy shit, I’m one of the best people in the state, region, or country at something. I’m GOOD at this.”

Want to make sure that kid stays down? Tell her at the outset that whether she can be good at something is determined by some cosmic special sauce she either was or wasn’t born with. Because if there’s anything the parents, the school system, and sometimes even the foster care and/or juvenile justice systems have taught a lot of these kids, it’s that they were filled with useless shit while everyone else was getting special sauce. They’ve never had success at anything, so that seems true. Why bother?

The hardest thing to get those kids to do is realize that they can control outcomes. That they can use dedication and learned skills — even their own horrific experiences — to compensate for other kids’ supportive backgrounds, loving parents, and douchey prep school blazers. I can think of no better way to keep a kid like that down than to tell her “it’s not up to you, it just depends on whether you got sprinkled with magic faery dust when you were born.” Or wherever the fuck “talent” is supposed to come from.

There’s another, less horrible, but almost laughably arrogant, statement implied in that as well. Let’s see, you’re a member of a writing community and are working on a novel and/or have completed other novels. You believe that only certain people have been graced by the cosmos with a limited-edition gift that gives them a (quite literally) God-given right to be better writers than lesser humans who merely work hard to learn and hone their craft. Gee, any chance you think you fit into that category of cosmically blessed, divinely graced, faerie-dust sprinkled literary Übermensch?


Yea, you can go fuck yourself about that. I’ll take a kid who was told she was mentally retarded for the first nine years of her life. The kid who was on so many medications, she was basically stoned from kindergarten through middle school. Straighten that kid’s meds out, give her a decent work ethic, and I’ll take her over you and your “talent” every fucking day.  Überdouche.

So, is talent a factor?

Meh. Maybe, at the extreme top levels of performance. As is so often the case with these things, the real answer is: Who gives a fuck? There could be some brain chemistry going on that would separate the Nobel-level writers from, say, Vonnegut. Maybe even something separating Vonnegut from Elmore Leonard. Leonard was such a nuts-and-bolts writer, coming up through the magazine-then-pulp/genre career path, that I doubt he’d attribute his success (or even the brilliance of some of his writing, which, at times, is brilliant) to any kind of cosmic special sauce. I know scores of people who actually write better than Dan Brown, though he does a good job of coming up with a story. The same is true of Stephenie Meyer. If “talent” is really a thing that results in great writing, I wouldn’t say either of them got much. But they both came up with great stories to tell that people wanted to read. Which is what makes them gazillionaire writers.

There are a lot of factors, and how each plays into a given person’s success is going to vary. An encouraging childhood with a lot of practice is not an option for a kid with illiterate and abusive parents, so that kid’s level of interest will have to be far greater to land her even close to the same place. But, one way or the other, some cocktail of several issues is going to be at play:

  • Practice — Whatever you want to call it. Dedication, hard work, the willingness to study and improve, writing a million words or for ten thousand hours or whatever.
  • Interest level — Someone obsessed with a subject at age five is probably going to be one of the world’s leading experts on that thing if she remains obsessed until she’s 50.
  • Childhood and adolescent environment – this, more than anything, is what I think gets mislabeled as “talent.” There are mountains of data on this, since we standardized-test the hell out of kids. Is there a single factor that heavily influences how well kids do on the English portion of the SAT, ACT, or any of the elementary basic skills tests? Hells yes — their parents’ median income. It predicts the outcome on standardized tests so well, we could probably save ourselves a lot of time and money and just score kids based on their parents’ W-2s. In a trial to calculate the damages (lost future earnings) of a child who was killed or permanently impaired, the economists on both sides rely primarily on one consideration: the parents’ education levels. Not their income, jobs, criminal histories, color, whether they’re married or divorced, or anything else. The parents’ education level correlates to future earnings even more than the kid’s own grades and test scores. That’s how much childhood environment eventually plays out (statistically) in your future.
  • Opportunity/luck — right place/right time, or whatever you want to call it. I have an accountant friend who was assigned Microsoft as a client at his first job at an accounting firm (because Microsoft was a ten employee company and did not have a full time accountant yet). A friend from college was a limo driver in Vegas with an idea for a TV show about Crime Scene Investigation units, who lucked into a chance to pitch his idea to Jerry Bruckheimer. In big and little/good and bad ways, I don’t think this can be ignored as a factor.

Those are all things that happen before you get to the idea that someone is somehow predestined to be wonderful at something or imbued by God or Zeus or whoever with some magical gift. Since synaptic connections in our brains are ridiculously flexible during the first five years of life, I think a lot of what we are calling “innate” is anything but. I don’t have a special debate gene, but I have a ten-year-old daughter whose favorite weekend activity is going with me to judge debate rounds, which she’s been doing since she was four. If Joyce Carol Oates had been extremely close to her father and he had been a volleyball coach and former captain of the Olympic volleyball team, do you think she’d have won the National Book Award for Them, or do you think she’d be one of the great women’s volleyball coaches of all time?

  • Talent? Meh. Fine. There is probably some ideal combination of chemical and environmental factors that would make one writer marginally better than another who worked at least as hard, had at least as much exposure and support, and got at least as lucky. To some readers, anyway. Since writing is subjective, the differences are bound to cut both ways. So, even then, it’s going to be a matter of opinion whether that “talent” thing went with writer A or writer B. Which shows how unbelievably stupid the whole argument is in the first place.

The Bottom Line

1)      Tell a kid who appears to suck at everything she does when she’s 14 (because she sucks at everything she does when she’s 14) that she needs cosmic special sauce to be good at something, and I may well punch you in the throat.

2)      If you want to walk around believing you have been imbued by the cosmos with special writing sauce, go for it. But it’s probably best to keep that a secret. By which I mean, we don’t really want to hear about it. Like you probably don’t want to hear how much of a douche I think you are.

3)      When someone shows me a writer who has diligently worked to hone her craft for ten years and still cannot rise to the literary level of Dan Brown or Stephenie Meyer (I’m talking literary level, not commercial success), I’ll worry about talent. Until then, I am going to keep reading, writing, and reading.

Brief update to add a new source:

Just tagging this on, because researching my next post (the impact of the type of music you listen to on tasks like editing), I ran across another article basically debunking the “talent” myth. It’s from American Psychologist, and is available free, courtesy of M.I.T.:


The Trouble with Prologues

This should really be a five word blog post:

Prologues are trickier than shit.

The end.

Notice there’s no advice. The second rule in Elmore Leonard’s ten rules of writing is “Avoid Prologues.” At first, this sounds like Mr. Leonard is telling us not to use prologues. Until you realize that Rules 1 and 3 don’t start with the word “avoid.” They start with the word “never.”

Avoid means steer clear of, think twice about, shy away from. Never means, well, never. Ever. Not even once. That’s a big difference. Particularly when Mr. Leonard’s comments about that rule consist largely of a brilliant example of someone (well, not someone, John Fucking Steinbeck) using a prologue.

From a pure writing standpoint, the answer is probably this: If you really know what you are doing and execute correctly, there’s nothing wrong with having a prologue. But remember, prologues are trickier than shit.

So tricky, in fact, I think writers who don’t already have a publishing (by which I mean novels) track record need to avoid them if at all possible.

The Real Problem with Prologues for Querying Writers

I think the entire prologue situation (both the problem itself and the extent to which writers exaggerate that problem) was summed up beautifully by Angela James, an editor for Carina Press (a Harlequin digital first imprint). She said:

Of course, I’m an editor, and if you’ve heard it once you’ve probably heard it from an editor or agent: we’re not always fans of prologues. I think this has morphed into authors saying that we HATE prologues, but that’s not true. What’s true is this: we see a lot of stories come through our slush pile that start with prologues, and 9 out of 10 times, they’re not necessary.

I’m willing to bet she speaks for virtually every agent and editor in the business when she says it begins – and ends – with “We’re not always fans of prologues.”

That’s far from “never do it or you will immediately burst into flames and the souls of your loved ones will be doomed for all eternity,” which is how a LOT of writers tend to treat the issue. Still, it’s a really good idea to avoid them if you can.

Why other people’s prologues suck (not yours, I’m sure yours is wonderful)

Prologue problems come in two flavors: Problems with the prologue itself (which we will call problems with other people’s prologues, because, seriously, I’m sure yours is wonderful) and problems intrinsic to having and querying a novel with a prologue (which we will call the real problems with having a prologue).

Problems with Other People’s Prologues:

  1. They are often used as info dumps, with all the attendant problems of info dumps.
  2. One of the most common agent/publisher complaints about beginner novelists is that they start the novel two or three chapters too early, before the story really gets going. A prologue adds a fourth chapter of “too soon.”
  3. Readers imprint on the first MC they meet, like baby ducks imprint on the first thing they see and follow it around assuming it’s their mama. The prologue MC usually isn’t the book MC, so readers feel cheated when you switch to your real MC.
  4. Many readers skip them, which means they need to literally be prologues — the story needs to stand on its own, completely independent of the prologue. So, by definition, a prologue has to be extra stuff.
  5. If it’s not an info dump, it’s probably backstory, and backstory is generally a very bad way to start a novel.
  6. Compared to working the prologue information in through flashbacks or directly through the narrative, a prologue is an easy way to get it out there (which is why the info dump/backstory concerns are so valid).
  7. Chapter One has to manage to introduce characters and setting and lay a lot of groundwork for a story. That’s hard to do without being boring. Some people use prologues to throw something exciting on the table first, in an attempt to “hook” the reader. I think this fails. It comes off as a gimmick, then you leave the reader with your boring Chapter One (possibly more boring, since you think you’ve taken the pressure off) and the reader goes from exciting prologue to boring chapter and thinks “the first real chapter of this book sucks.” It’s like having a date show up in a Ferrari but then having him drive you to Taco Bell.

There are certainly more, but that gives a decent idea of why, as Ms. James put it, “9 out of 10 times, they’re not necessary.” Worse than not necessary, the things those other writers are trying to do through the prologue – provide backstory, build the world, start with something interesting, etc. – are the very things that separate great writers from the good. Great writers build incredible worlds and provide deep, rich backstories throughout the narrative core of their books.

The Real Problems with Having a Prologue

The real problem with having a prologue, even if it’s both necessary and brilliant, is: Seriously, prologues are trickier than shit.

For starters, they present logistical problems. You’re ready to query and the agent you are querying asked for the first three pages or your first chapter or whatever. Does that mean your prologue, or Chapter One?

According to literary agent extraordinaire Janet Reid, a/k/a the Query Shark, “your first five pages” or “first chapter” obviously means the first part of the novel, not your prologue:

The five pages you attached don’t mention either character or any of the plot you cover in the query letter. It’s as though you sent five pages that have nothing to do with this query.

That’s one of the (many) problems with prologues. When you query with pages, start with chapter one, page one. Leave OUT the prologue.

Nathan Bransford, on the other hand, says that “first 30 pages” obviously means the first 30 pages that are part of your book:

I want to see the first 30 pages as you want me to send them to the editor. If that involves a prologue… let’s see it.

Oops. Those are agents (well, in Nathan’s case, now an ex-agent) who blog a lot about what they expect and want to see, and the advice is diametrically opposed. If I had to guess, I’d say more agents probably agree with Nathan, but that’s a guess. I doubt Janet is completely out in left field, so it’s safe to assume a significant portion of agents agree with her take as well. Either way, having a prologue puts you in a potential “fucked if you do, shitfucked if you don’t” situation.

There’s also the problem of Pavlov’s agent (or, worse, reader). Imagine having 200 queries and sample pages to wade through in a day. Ten of those had prologues, and all ten treated you to boring-ass worldbuilding, backstory info dumps. You open your 200th query, and discover it’s the eleventh to start with the word “Prologue.” At this point, you expect it to suck. There’s a 90% chance you’ll be right. You’ve been conditioned to expect it to suck. Maybe even conditioned to think it sucks.

It’s not your prologue’s fault. It’s those ten other stupid, needless prologues that came before it. But you’ve been tainted by association. Now, at best, the reader is looking to see how much of an info-dumpy, backstory-filled piece of shit your prologue is, not objectively looking at how good or bad it is. Prejudice is an ugly thing, but it’s also a real thing.

The Bottom Line on Prologues?

In this case, it’s also the top line. Prologues are trickier than shit. If possible, you should avoid having one. I don’t think agents and editors hate them, and I don’t even think most readers skip them (although I’d bet that’s more of an issue with YA readers, for example, than with lit fiction readers). But I do think they bring a host of new problems to the party, even if they don’t suffer from the problems that are endemic to prologues generally.

Put differently, there is the way you dress for a job interview, the way you dress on your first day of work, and the way you dress when you’ve been working the same job for a few years. Prologues are a pair of shorts and a T-shirt. Even if that’s how you’ll be showing up down the line, right now you’re interviewing and it’s probably best to clean things up for one day. It certainly won’t hurt.

UNLESS you absolutely understand exactly what I’m saying here, see the problems, are positive you aren’t providing background, worldbuilding, info-dumping garbage, and know that your story really, really needs a prologue for a very specific reason that can’t be handled through the body of your narrative.

Because there are some jobs – lifeguard, surf/snowboard/skateboard sales, marijuana dispensary clerk and/or gardener – where you just look like a douche showing up for the interview in a suit.

Or, to summarize through the miracle of meme generation:

Grammar, Style, and Usage are Three Different Things

Advice on grammar, style, and usage is often confused. This can be particularly troubling when, for example, style advice is touted as grammar advice. If you don’t think that can happen, just turn on grammar check and, assuming you use MS Word, look at all the green squiggly lines underneath grammatically pristine phrases. I type my blog posts in MS Word, then cut and paste them into the blog. There are two in this paragraph already – one because the phrase “advice is touted” is passive, and a second because the prissy little fuck doesn’t like contractions.

The distinctions are important, because ungrammatical phrases (which have a few subcategories of their own) are usually a problem. Style preferences are just that, preferences. Usage changes so much that it is one of the keys to communicating effectively with your audience. When grammar check or beta readers or other people providing critiques mistake a question of style for one of usage or grammar, for example, things get muddled. At a minimum, it helps to have a handle on the nature of the advice being given.

What is Grammar?

There’s a reason we call it “grammar school” and not “syntax school.” Grammar is less of a category than a broad term that covers a bunch of different fields of study. That can make things confusing, because the importance of any particular “rule” of grammar depends heavily on which category it comes from. From a linguistic perspective, there are areas of huge importance within the scope of “grammar” that we, as writers, don’t need to worry about. Not that they aren’t important, they’re just so ingrained in a fluent English speaker that we don’t need to give a shit about their linguistic/grammatical formulations.

Language came first, probably about 200,000 years before the first linguist showed up on the scene. This happened long before the invention of writing, when our ancestors were all either hunter-gatherers or sold GEICO automobile insurance and looked like this:

To be honest, more than one of my relatives and about eighty percent of my fishing buddies still look like that.

Linguistics came along to study languages after languages were a thing. And a whole slew of the grammatical issues linguists study (like why we know the word “cars” means more than one car, even though Chinese uses a completely different set of rules to make that distinction) just don’t matter if you can speak fluent English.

The part of “grammar” we really need to worry about is syntax and construction. The stuff that makes sentences make sense. Syntax is the building block for construction – it’s the basic rules that let us understand what the hell each other mean when we say something. “We ate pie,” is basic subject, verb, object syntax saying what happened in a way that English speakers understand. The cognitive construction involves the listener, who probably knows (from the context the speaker gives and/or pie stains on her shirt) whether “we” refers to the speaker and one or more other people or the speaker and the listener.

This is a really important distinction, because the listener’s (or, more likely, reader’s) role is key. It trumps everything else. Brilliant syntax is meaningless without cognitive construction. Put another way, lexicological formulations achieve Floccinaucinihilipilification when opacified through superfluous bullshit.

There are no green squiggly lines under that last sentence, because it’s grammatically pristine. It’s still a train wreck. Grammar, in itself, does not make writing clear to the reader. It often helps, but it’s merely a guide to what the reader expects to see. That’s the heart of what the “rules of grammar” really are. They are a guide, written down after the fact, to how we say things in the most easily understandable way. “Pie we ate” is probably still understandable, but it sounds weird (because it’s not the standard subject-verb-object formulation we use in English). It takes more work for the reader to understand the meaning, because you sound like Yoda.

Those are some important factors to keep in mind when thinking about grammar. Specifically:

  • Grammar is backward looking. First people communicate in a way they all find works, then linguists assign rules to explain what it is that lets it make sense.
  • The only reason we have “rules of grammar” is to give context to the information we are being provided.
  • Something that follows the linguistic structure for written English (i.e., is grammatically correct) can still suck balls. If you put a cardboard box on top of the foundation for a mansion, you haven’t built a mansion.
  • A phrase or sentence that is readily understood by the reader in a pleasing and predictable way has accomplished what the “rules of grammar” seek to accomplish, whether it follows those rules or not.

Don’t infer from the above list that I’m not a fan of following the rules of grammar. Outside dialogue, I do so the overwhelming majority of the time. But it is important to realize that the “rules of grammar” are a means, not an end. Occasionally, you face a choice between a clearer sentence that breaks the rules and a rule-abiding sentence that sacrifices clarity. In those cases, clarity wins.

What is Style?

My favorite definition of style comes from Orson Welles, who wrote, “Style is knowing who you are, what you want to say, and not giving a damn.” My English professors (with one notable exception, who was awesome) did not agree. Style was something they often confused with grammar, imposing arbitrary little preferences about voice and tone as “rules,” despite the fact that they have nothing to do with proper syntax. If you find someone who thinks the correctly named (but otherwise often incorrect) Elements of Style is a grammar guide, you have identified this problem.

Style, to an editor, is a set of rules that fill in the gray areas left by broad grammar rules. Issues like whether parentheses or em-dashes should be used to set off a particular clause, or how to hyphenate a set of compound numbers that combine to form a big-ass compound adjective. To a writer, style can best be defined as the set of preferences that aren’t syntactic rules. A subject worthy of its own post, but including:

  • Don’t begin sentences with conjunctions (there is no grammatical basis for that “rule”);
  • Don’t split infinitives (following this one sometimes screws things up in a big way);
  • Do not use contractions (that’s probably correct for most formal writing);
  • Never end a sentence with a preposition (what’s that all about?)
  • Passive voice should be avoided (by zombies!)

Those and other style preferences, which may be fucked in their own right, are often incorrectly pointed to as “rules of grammar” by writers, writing instructors, stupid green squiggly lines, and others. They aren’t. They’re basic bits of advice and nothing more.

What is Usage?

There’s a reason these three things need to be discussed in this order. Grammar is the basic syntactic framework of language, the nuts and bolts that allow two people speaking the same language to understand each other. We don’t use nouns to introduce adverbs in sentences without any predicates. We don’t even have to think to avoid doing those things, because most of those rules are hard-wired into how we think.

Style is the more flexible but easily definable set of constructs that (often) make grammatical sentences more easily understandable. There is no grammatical prohibition against using four negatives to state one positive, but it’s confusing as hell. There sure as hell isn’t a grammatical or linguistic reason for preferring active voice over passive, but it’s advisable 95% of the time. This is a gray area, and it’s like Velcro for bad “grammar” advice.

Usage is that next level down (or up) toward genuine readability. It is the difference between:

“At least one individual in the overwhelming majority of U.S. households…”

and

“Every family’s got one…”

Just as people often call style issues grammar rules, matters of usage are regularly confused with style. And, in the true “style” sense, that’s fair. Strictly speaking, though, that flair that Welles was talking about relates more to usage than style. If you think of writing as an inverted pyramid, Grammar is the wide base – it includes every possible way of communicating a thought that will be recognized by an English-speaking person as correct. Style narrows that field down, eliminating the most tortured and inapt ways of expressing that thought. Usage is the tip, containing only those ways of stating a thought that will clearly resonate with your reader. It is, literally, the way we use the words to create an image. You need a reasonably good handle on grammar and style before you can focus on usage, but for a writer, usage is what matters most. It is the part of writing that is concerned with the reader’s reaction – not just her ability to decipher. It is the most flexible of the trio, and it is constantly changing. Grammar, by contrast, slowly evolves.

Why does this matter?

Because it gives context to advice. In truth, aside from fragmented sentences, violations of grammar rules are rarely, if ever, a good idea. But a whole shitpile of opinion about style and usage is dressed up as grammar advice, which gives both far more weight than they deserve. For example, here is a list from an article in the Guardian entitled: 10 grammar rules you can forget: how to stop worrying and write proper:

  1. Split infinitives (style, not grammar)
  2. Ending a sentence with a preposition (style, not grammar)
  3. Subjunctive verb form (usage, not grammar. I also think they’re wrong – but it’s a usage question, so I can)
  4. Double negatives (this is probably in a gray area between style and usage, clearly not grammar)
  5. Use of “between” to refer to more than two things (this is about as usagey as usage gets)
  6. Use of “with,” “by,” or “of” with an adjective like “bored” (if possible, this is even more usagey than 5)
  7. Using gerunds (verbs turned into nouns by adding “ing”) (style, not grammar)
  8. Conjunctions at the beginning of sentences (I agree. But this has nothing to do with grammar)
  9. Use of singular verb with the word “none” (which is based entirely on the usage of “none”)
  10. Using try twice in a sentence (until I saw this, I had no idea it was a rampant style problem).

See the problem? There is not one fucking grammar rule on the list. If someone points out a grammatical mistake in your writing, you almost certainly need to fix it. If someone points out a usage issue, you need to decide how much that person’s take on usage is in line with your reading audience.

There’s a big difference.
