This novel in three sections follows a nameless man on a journey west. Flat, neutral-sounding declarations meander around a variety of encyclopedic topics — firearms and mass shootings, but also homosexuality, autism, and the goth subculture. The language becomes increasingly simplified and fragmented. This 2018 edition reflects current events and was generated with up-to-date text and links from some of the writers struggling the hardest to produce explanations.
The 2018 edition is for sale as of July 4, 2018. Produced on the MIT Press Bookstore Espresso Book Machine. Edition of 13 (corresponding to the original 13 states) + 3 artist’s proofs (red, white, and blue), numbered and signed by the author/programmer.
The author originally planned to regenerate, copy edit, and produce a limited edition of Hard West Turn annually using an independent bookstore’s print-on-demand machine. The production aspect of this project became impossible, however, so the 2018 edition will be the only one in print.
The 2018 edition was copy edited and designed by the proprietor. It was proofread by him with the kind assistance of Stephanie Strickland. Specifically, spelling and punctuation corrections were made, with U.S. spellings now used throughout, and sentences in which proper nouns remained after processing were manually removed. No other changes were made to the output, which derives almost entirely from the English and Simple English Wikipedias.
The first draft of the generating program was written in November 2017 for NaNoGenMo (National Novel Generation Month). Here is what some of the people who wrote and read NaNoGenMo novels said about the first draft, generated that month:
“maybe the greatest compliment for a generative book: i read the whole thing, in one sitting” —Everest Pipkin
“I’m blown away. This is not just well executed, this is where generated text starts to enter the realm of real art.” —Filip Hracek
“... the one I was most impressed with is Hard West Turn by Nick Montfort. He wrote a program that basically searches Wikipedia for accounts of recent shootings in the US, extracts words, sentences, and sentence fragments from them, and then reassembles them with some connecting words and sentences of its own ... The result is oddly powerful ... the piling of sentence upon sentence describing fragments of atrocities, all jumbled together, creates an overwhelming impression of this wave of awfulness in modern American society. The fact that we know that these have been dispassionately pasted together by a computer somehow enhances the effect ... as the text progresses, new themes appear in it, and the sentences begin to break down as phrases and, increasingly, single words are just repeated over and over again, as if the computer itself can't handle the material any more.” —Jack Reyn
The 2018 program that generated this book is, like the first draft, offered under a permissive free software license. The 2018 code no longer functions because of changes in the structure of Wikipedia articles, though an updated 2019 version of the program is available. You may generate your own book, modify the code, or do whatever else you like with the program. Here is a short excerpt from the 2018 code; the overall program is only 270 lines long and 11 KB in size:
english = 'http://en.wikipedia.org'
simple = 'http://simple.wikipedia.org'

mass_shootings = english + '/wiki/Mass_shootings_in_the_United_States'
html = urllib2.urlopen(mass_shootings).read()
soup = BeautifulSoup(html, 'lxml')
deadliest = soup.find('span', id='Deadliest_mass_shootings').parent

. . . .

for count, rel_url in sorted(((links.count(e), e) for e in set(links)), reverse=True):
    if 1 < count < 14:
        article = simple + rel_url
        html = urllib2.urlopen(article).read()
        soup = BeautifulSoup(html, 'lxml')
        content = soup.find('div', id='bodyContent')
        new_paragraphs = []
        for p in content.find_all('p'):
            . . . .
        for paragraph in new_paragraphs:
            blob = TextBlob(paragraph)
            for s in blob.sentences:
                string = str(s)
                string = all_lowercase(string, content.getText())
                if string is not None:
                    if ',' in string and re.findall(r'\(', string) == \
                            . . . .
                        string = string.split(',')[0] + '.'
                    if string[-3:] == 'm..':  # Sentences ending "a.m.." and "p.m.."
                        string = string[:-1]

. . . .

for string in simple_litany:
    . . . .

for string in litany:
    . . . .

for string in degenerate_litany:
    if ' ' in string and len(string.split()) < 5 and \
            ',' not in string and '(' not in string:
        degenerate_litany.append(string[:-1] + ', ' + string.lower())
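The excerpt above is Python 2 and, as noted, no longer runs against today's Wikipedia. As a rough illustration of just the final step visible in it — short, comma-free sentences being doubled with a lowercased repetition to build the "degenerate litany" — one might sketch it in modern Python as follows. The function name, inputs, and exact logic here are my assumptions for illustration, not the author's program:

```python
def degenerate_litany(sentences):
    """Sketch of the litany-degeneration step suggested by the excerpt:
    sentences of fewer than five words, containing no commas or
    parentheses, are doubled with a lowercased repetition, so phrases
    pile up as the text progresses. An assumption, not Montfort's code."""
    litany = []
    for string in sentences:
        if (' ' in string and len(string.split()) < 5
                and ',' not in string and '(' not in string):
            # Drop the final period, then repeat the phrase in lowercase.
            litany.append(string[:-1] + ', ' + string.lower())
    return litany

print(degenerate_litany(['The man went west.',
                         'A long sentence with many words here.']))
```

Only the first input sentence qualifies (the second has more than four words), so a single doubled phrase is produced.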