This section addresses the numerous biases that affect the media. Commercial bias—the “biggest” (62) of the biases—assumes that news needs novelty, “conflict and momentum” (62). Bad news bias preys on the human brain’s tendency to “care about anything that remotely threatens us” (62). In a graphic depiction of this bias, gigantic bugs labeled with words like “socialism,” “sexting,” and “anthrax” swarm around a cartoon Gladstone.
The status quo bias refers to humans’ preference for “things to stay the same” (63) and for change to come only when the benefits are “guaranteed to be huge—and the risks miniscule” (63). Because of this, the media tend to “ignore any position that advocates radical change” (63). Associate professor of journalism Andrew Cline says the status quo bias manifests as a belief that “the system works” (63). The “mainstream media” (63) thus refuse to question “the structure of the political system” (63), or the “American way” (63).
Access bias affects source material. “The price of admission” (64) to the “halls of power” (64) can be steep. By agreeing to use sources who wish to remain anonymous, journalists acquire good quotes but deny their readers context for those quotes. This kind of journalistic self-censorship can have a detrimental effect on journalistic quality, and Gladstone sees no value in these compromised source relationships.
Humans prefer news accompanied by images, a preference known as visual bias. For example, The Washington Post published its first front-page story on “the torture of detainees” (65) three months before the Iraq war even began. However, it wasn’t until April 28, 2004—when pictures of the torture at Abu Ghraib surfaced—that the public began to care about the torture coverage.
Narrative bias—humans’ preference for stories with a beginning, middle, and end—affects stories that don’t have tidy endings. These include science stories, in which ongoing research and progress don’t allow for a neat narrative structure. The media try to fix this problem by “tacking on a provisional ending” (65) to these stories. In Neufeld’s accompanying illustration, Gladstone’s cartoon self gestures to televisions with opposing headlines like “Fat makes you fat!” (65) and “Fat makes you thin!” (65). Gladstone claims that presidential campaigns use narrative bias to construct stories with reusable plots and defined characters—one candidate as the “scheming emasculator” (66), another as the “mendacious prig” (66), and another as the “savior” (66). Sometimes, though, candidates change their stances, rendering these narratives useless.
Sometimes, biases combine and result in “thrillingly misleading reporting” (67). Gladstone offers as an example coverage of the 2003 toppling of Saddam Hussein’s statue in Al-Firdos Square in Iraq—a “visual/narrative bias combo” (67). The media portray the toppling as initiated by Iraqis and taking place in a crowded square. However, reports have since surfaced that a US Marine gunner initiated the toppling, and that reporters “zoomed in to make the sparse crowd” (68) of Iraqis look larger. Fox News replayed the video of the statue falling every 4.4 minutes, and CNN replayed it every 7.7 minutes.
Fairness bias causes some journalists to “bend over backward to appear balanced” (69), giving equal time to opposing viewpoints even when the two sides are not equally credible. For example, during the Bush-Kerry presidential election, a group funded by a major Republican campaign donor ran attack ads against Kerry. The ads accused Kerry of lying to get “two decorations for bravery and two of his three Purple Hearts” (69) during his service in Vietnam. The ads “blanketed” (69) talk radio, and TV news anchors and newspapers repeated their claims. Kerry’s Navy records and interviews with his crewmates quickly “debunked” (69) the ads, but this evidence drowned “in a miasma of mainstream media weasel-speak” (69), forever tarnishing Kerry’s reputation.
In 2005, Scientific American ran a satirical response to critics in its April Fools’ Day issue. John Rennie, the magazine’s editor, aimed the response at readers who criticized its “hideously one-sided” (70) coverage of evolution. Rennie wrote that, as good journalists, the magazine’s staff should “present everybody’s ideas equally” (70), even those that “lack scientifically credible arguments or facts” (70). Who’s to say that US senators, politicians, or “best-selling novelists” (70) don’t know anything about evolution?
This section covers war reporting from the American Civil War through the Gulf War. War reporting involves nearly every media bias. For example, the military can “bar, expel, and jail reporters” (71), creating an enormous access bias in coverage. The narrative bias also “comes into play long before” (71) troops arrive. To sway public support for a war, the government supplies the public and the press with a plot detailing the imminent threat and “the enemy’s depravity” (71). For example, in October 1990, a young Kuwaiti woman named Nayirah testified before members of Congress that Iraqi soldiers had invaded the hospital where she volunteered in Kuwait City. The soldiers “took the babies out of the incubator […] and left the babies to die on the cold floor” (72). The “atrocity was a fiction” (72), concocted by an American public relations firm; Nayirah herself was the daughter of the Kuwaiti ambassador to the US. Nevertheless, her testimony helped persuade the “bare majority” (72) of senators who voted to go to war.
The American Civil War sets “the template for future war journalism” (78) as the press utilizes the latest technology—the Morse telegraph and photography. Editors tell their reporters to telegraph all news and to “send rumors” (74) if they can’t send truth. With these instructions, reporters—mostly pro-Union—fabricate battles and events they “haven’t witnessed” (75). In response, War Secretary Edwin Stanton bars reporters from the front and issues press releases for them to print “unedited” (76). Stanton, in control of all telegraph lines, imposes a news blackout, and “hysterical rumors” (76) run rampant. Additionally, reporters begin to use bylines for their articles.
On September 17, 1862, New York Tribune reporter George Smalley “quietly attaches” (77) himself to the Union troops at Antietam. Smalley writes of this “bloodiest day in US history” (77) without impartiality, reporting that the ineffectiveness of General Joe Hooker’s troops led to an inconclusive outcome. Smalley’s report goes not to his editors but to President Lincoln himself. Upon receiving it, Lincoln fires General George McClellan.
Though he opposes entry into the war during his 1916 presidential campaign, by 1917 Woodrow Wilson “wants to sell it” (79). Wilson forms the Committee on Public Information (CPI) to spread “war fever” (79) by “equating dissent with disloyalty—and demonizing the enemy” (79). Newspapers fill with CPI-propagated reports. CPI chief George Creel calls these articles “propaganda in the true sense of the word […] the propagation of faith” (79).
During World War I, France and Britain bar most reporters from the front. American reporter George Seldes remains confined to London and spends the first few years of the war accepting outrageous reports from New York and Europe, including unsubstantiated accounts of Germans crucifying Canadian soldiers. The US government prohibits the publication of photos of American dead. When the war ends, Seldes interviews German Supreme Army Commander Paul von Hindenburg, who admits that “The American infantry in the Argonne won the war” (81). General John Pershing court-martials Seldes for crossing into German territory and censors the interview.
In World War II, many American journalists are “embedded with the troops” (83). CBS newsman Edward R. Murrow reports from London, where he “risks his life on perilous bomber runs” (82) and often “wears a uniform” (82) to show loyalty to his country. Reporter Ernie Pyle writes columns from the front, where he is later killed by Japanese machine-gun fire. Pyle receives a Purple Heart—a rare honor for a civilian.
The US government continues its practice of censorship, best demonstrated in the dropping of the atomic bombs on Hiroshima and Nagasaki. The official story from the White House calls Hiroshima “an important Japanese army base” (83). This is misleading; Hiroshima does contain an important military base, but the US dropped the bomb in the middle of the city. Bombing Japanese civilian centers “undermines morale” (84), but admitting as much would not play well with the American public. Most American newspapers run a full press release written by William “Atomic Bill” Laurence, a “long-time A-bomb advocate, now on the Pentagon payroll” (84).
After the second atomic bomb is dropped on Nagasaki, Laurence wins a Pulitzer Prize for his “extensive reporting on the atomic bomb” (84). However, Laurence fails to mention the radiation that kills more and more Japanese who would otherwise have survived the blast. Australian journalist Wilfred Burchett reports on the devastation of atomic radiation on the citizens of Hiroshima, but the “official position” (85) is that the bomb contained “no lethal radiation” (85). Laurence continues to act as the US government’s mouthpiece, claiming the Japanese fabricated reports of radiation sickness as propaganda.
As in World War II, the “media and military are united in a common cause” (87) at the start of the Vietnam War. Video journalism shows “images of brave boys, fighting for nothing less than the American way of life” (87), and the American public watches updates from Vietnam every night in their living rooms. The broadcasts rarely show gore, but in 1965, CBS airs footage of American soldiers evacuating Vietnamese villagers and then setting fire to their huts with “flame-throwers and Zippo lighters” (87). President Johnson has nasty words for CBS President Frank Stanton.
In January 1968, American forces “soundly defeat” (88) North Vietnamese “regulars and guerrillas” (88) in the Tet Offensive. Despite this military victory, Walter Cronkite wonders on his newscast whether America isn’t “mired in stalemate” (89) in Vietnam. The Pentagon, Johnson, and later presidents believe the media “distorted the truth, and weakened America’s will to defend itself” (89)—a belief called “Vietnam Syndrome.” This so-called syndrome—famously invoked in a speech by President Ronald Reagan to the Veterans of Foreign Wars—prevents Americans from seeing American intervention in foreign affairs as positive.
By 1991, Reagan’s indictment of Vietnam Syndrome prevails. President George H.W. Bush calls for, and receives, half a million troops “against the Iraqi invasion force in Kuwait” (91). During this conflict, American media are not embedded as they had been in Vietnam. Instead, they attend daily press briefings filled with the latest technology but “not one word of assessing civilian casualties” (91). The American public voices no concern over the Pentagon’s censorship or its curtailing of certain kinds of journalism. Instead, the public tunes into television stations offering “ecstatic appraisals” (91) of this “very efficient war” (91).
The 2003 Iraq War under President George W. Bush does include embedded journalists. However, the Pentagon has strict rules about where they can and cannot go. It preps reporters in “boot camps” (92) and keeps them behind the lines, where they can see “where the missiles were launched—not where they landed” (92). Embedded NPR reporter John Burnett’s initial positivity quickly turns to frustration. Burnett leaves his assigned unit and finds himself in a small village that has just been destroyed by US Air Force fire. The Air Force claims its “precision-guided bombs aimed at tanks and track vehicles” (94) hit all their targets, but Burnett now sees otherwise.
Objectivity in American journalism “emerges as a selling point” (96) in 1833, when the price of newspapers drops from six cents to a penny. The New York Sun starts this price drop as it shifts from relying on “deep-pocketed political parties and a few thousand well-heeled subscribers” (96) to “eager advertisers” (96). The drop allows for content changes, including “more local politics, more crime, more drama, more scoops” (96) and less content driven by special interests. Cheap papers flood city streets, rife with advertisements that make the “penny-paper magnates” (97) rich.
“Impartiality” (97) becomes important as a selling point, and publishers like James Gordon Bennett vow to “support no party […] and care nothing for any candidate” (97). However, Bennett also runs articles calling Republican nominee Abraham Lincoln a “fourth-rate lecturer” (97). Early news publishers get to define “impartiality” (97) for themselves, which makes it difficult for readers to discern truth from special interests. While some publishers—like Adolph Ochs at the New York Times—prioritize a commitment to impartiality, this alienates readers who are used to a thin line between “facts” and “values” and to reading stories that confirm a vision of the world as cruel but “rich with opportunity” (99).
Ochs promises to deliver the news “impartially, without fear or favor, regardless of party, sect, or interest” (98), even as the paper remains devoted to “the cause of sound money and tariff reform” (98) and advocates for low taxes and limited government. In making this statement, Ochs positions the Times as a paper that “favors information over narrative” (98) and “the ‘facts’” (98) over readers’ emotions or values.
Gladstone argues, though, that it’s “unprofitable to ignore your readers’ emotions, assumptions, and values” (98). And as writer Walter Lippmann believed, Ochs’s commitment to objectivity hides the fact that Ochs finds “edification […] more important than veracity” (101). “True opinion” (101) unsupported by facts cannot compete with fact-based truth, so Lippmann urges reporters to use “methods of science to discipline their minds and scrutinize their facts” (102). While Lippmann views objectivity as a process, others view it as a writing style: simple sentences with “no emotion” (102) and no “first-person pronouns” (102).
Another shift happens in the 1920s, following World War I. As artists, writers, and veterans decry the atrocities of the war, awareness spreads that “governments lie, that newspapers lie” (99), and a dissonance between facts and values emerges. In this void, “public relations pioneer” (101) Edward Bernays exploits “peacetime markets” (101), marketing smoking in newspaper ads as a “symbol of women’s liberation” (101).
However, by the mid-twentieth century, creating public consensus becomes the driving force behind the news media. The public craves “legitimized” (102) news sources that prioritize objectivity. The emerging television news media need “audiences of unprecedented size” (103), and the US government needs “political consensus and ideological conformity” (103) in order to wage the Cold War with Russia. For once, the news media’s and the government’s aims work in “symbiosis” (103). CBS newscaster Walter Cronkite, once named the “most trusted man in America” (103), exemplifies this new kind of journalism. Cronkite delivers nightly news of just “facts, unseasoned and served deadpan” (103). His broadcasts provide a “kind of national mirror” (103) of a unified American identity: white, Christian, and middle class. The problem, though, is that this identity is not representative, nor is the news he reports objective. Rather, mid-century journalists see “no conflict between objectivity and anti-Communism” (104), and nightly news reports cover bomb drills and staged nuclear attacks.
In an interlude, Gladstone appears in the frame holding up a donut. She explains that historian Daniel Hallin divides the journalist’s world into “three spheres” (105). The donut hole represents the “sphere of consensus” (105), which holds “unquestionable values and unchallengeable truths” (105). The donut itself represents the “sphere of legitimate controversy” (105), where “objective” journalism “thrives” (105); it includes issues nudged out of the sphere of consensus. Outside the donut is the “sphere of deviance” (106), which contains “irrelevant” ideas that challenge “the mainstream of the society” (106). The press, Hallin argues, act as gatekeepers of the “limits of acceptable political conduct” (105).
To see how this gatekeeping works, Gladstone offers an example from 1909. That year, Missouri Senator W.J. Stone strikes a black waiter in a Pullman dining car, citing “bad service” (106). Stone is acquitted of assault, but the New York Times “rebukes” (106) the senator, quoting Stone’s use of a racist slur and indicting him for his obvious racism. According to Gladstone, the “sphere of consensus moves” (106): In 1909, hitting a waiter for bad service fits within the sphere of consensus, but the Times brings Stone’s actions into the sphere of legitimate controversy, deeming them racist and dishonorable to the office of the Senate. “The waiter’s perspective” (106), however, falls into the sphere of deviance; given the unchecked white supremacy of the time, reporting on the waiter is seen as “unseemly advocacy” (106).
Gladstone debates how best to “serve the news consumer” (108) in these cases and concludes that it’s not by guessing at consensus, following polls, “mechanically allotting equal spaces” (108) to both sides, or choosing facts to “fit a viewpoint or promote an outcome” (108). It’s better for reporters to figure out how they “really feel about the issues they cover” (108). Some, like Washington Post executive editor Len Downie, claim it’s best not to take a position on any issue. Others, like political writer Michael Kinsley, advise against becoming a “political, ideological eunuch” (109); Kinsley believes it’s better to hold opinions and consciously set them aside for the sake of objectivity than to have no opinions at all.
Gladstone argues that in the current media climate, newspapers and newscasts “rarely go past official statements” (110). The media have tried to “build a wall between the editorial pages and the news pages” (110). Now, however, “entire cable news channels” (110) deliver their news through “obvious political prisms” (110), and the websites Americans use as news sources make “no secret of their ideological leanings” (110). As technology develops, web-based news media become driven by “tracking online behavior and targeting ads to individuals” (110)—harkening back to the turn of the 20th century, when advertising shaped the news.
As writer David Weinberger claims, in the 2010s “transparency is the new objectivity” (113). The goal of the modern reporter isn’t to “make the world better” (112) but to inform consumers so that they can make the world better. To gain consumers’ trust, many reporters now use the Internet to disclose “their views, values, process, and—whenever possible—their sources” (113).
Most bloggers and even some print journalists opt for “full disclosure” (113). The era of reporters as “dispassionate marble gods” (114) is over, says Time magazine’s James Poniewozik. Poniewozik believes now is the time for reporters to “admit that, like responsible citizens” (114), they care about elections but are capable of putting their biases aside for the sake of objectivity.
Gladstone writes that while news consumers claim to want objectivity, they tend to consume news outlets that “reflect their views” (115). A 2006 study designed by a Stanford University professor and a Washington Post poll director found that Republicans consistently chose Fox News stories over CNN and NPR, while Democrats chose NPR and CNN but with only “lukewarm” (115) preferences for them. The study concluded either that Democrats find NPR and CNN “insufficiently slanted” (115) or that they are “less inclined” (115) to seek out news that confirms their biases.
Having trustworthy, transparent, and objective news media is not quite enough to remedy the situation. Humans are “driven more by impulses and biases we never knew we had” (117), which affect our ability to judge our preferences and make decisions. For example, in a 2008 study conducted at the Max Planck Institute in Leipzig, researchers using brain scans could “predict people’s decisions” (119) about which of a series of buttons to push “seven seconds before the test subjects were even aware” (119) of having made a decision. Additionally, multiple studies have shown biases in American perceptions of people based on gender, race, and weight.
Bias awareness can help with discerning truth, but studies have also shown that if a statement is repeated often enough, “people will believe it, even if it’s labeled as false” (122). Gladstone offers as a recent example the claims about weapons of mass destruction in Iraq.
According to psychologist Leon Festinger, when individuals are presented with “unequivocal and undeniable evidence” (124) that their belief is wrong, they experience cognitive dissonance and react in one of a few ways. Some will “emerge […] even more convinced of the truth” (124) of their beliefs than ever before, and others may even feel a need to convert people to their view. Festinger offers as an example a doomsday cult that, after each failed prophecy, recruited new members.
While some skeptics say they’ll believe it when they see it, even visual evidence is sometimes fallible. For example, a woman who was raped later identified a man she had seen on television as her attacker. The man was innocent; he had merely appeared on television while the woman was being attacked, and her brain “conflated” (126) his face with that of her attacker. Additionally, “photoshopification” (127)—the ability to easily doctor images—means individuals now find it “easier to disbelieve documents and photos that are real” (127), such as the photos of torture at Abu Ghraib.
As consumers struggle to “consciously filter” (128) the “cloudy water” (128) of the news media, they must come to understand how their brains work.
In this section, Gladstone addresses the relationship between contemporary technology and the human brain, and the Internet as an influencing machine. Appearing as a bird in a flock, Gladstone reports that humans “instinctively drift” (129) toward people like themselves—a phenomenon called “homophily” (129). The Internet’s “ability to link like-minded souls everywhere” (130) fosters this perspective-reaffirming tendency. As American legal scholar Cass Sunstein writes, people who talk only to like-minded others tend toward extremism and “marginalize the moderates […] and demonize the dissenters” (130). Gladstone worries that the more the Internet enables people to “better edit and augment our preferred reality” (131), the less tolerant and more stunted “intellectually or morally” (131) people will be.
Author Nicholas Carr expresses a different fear about the Internet: “Is Google making us stupid?” (132). Carr writes that “someone, or something” (132) has been “remapping the neural circuitry” (132) of his brain. No longer able to sit, immersed in a book or lengthy article, Carr blames the Internet's pace and instantaneous access to information.
Gladstone addresses the various “histrionics” (133) about technology over the past 500 years. New technology is always alleged to “destroy our concentration, memory, communities, our mental and physical health” (133); the same was once said of television, radio, and even books.
The Internet, with its infinite depth, has given users unprecedented access to, and choice of, information. In 1999, researchers Sheena Iyengar and Mark Lepper conducted a consumer test suggesting that “too much choice breeds apathy and paralysis” (137); Swiss psychologist Benjamin Scheibehenne tried and failed to replicate the study’s results nearly a decade later. Though it would seem the Internet could easily produce “information overload” (138), “a constellation of aggregators, social networks, traditional news outlets” (138) and more use algorithms and other methods to construct information filters. Thus, Internet users are not overwhelmed.
While it seems that humans’ tendency toward homophily would stunt the Internet’s infinite possibilities for human connection, researchers have shown this may not be the case. In fact, concludes Pew researcher Lee Rainie, most Internet and cell phone users have “bigger and more diverse networks” (140) than those who don’t use these technologies. Internet users behave “like information omnivores” (140), seeking out “more arguments opposed to their views” (140) rather than avoiding opposing viewpoints, as Gladstone had posited.
In evolutionary terms, using the Internet does seem to be changing the way humans think, much like the fictional influencing machine. Tools changing human brains is not, however, a new or limited phenomenon. Anthropologists now tend to agree that humans and “their tools ‘co-evolved’” (143). For example, humans began walking on two legs after starting to carry clubs for hunting and defense, and they developed bigger brains after they began walking. Some suggest that the Internet is shifting humans’ cognitive styles from “deep attention to hyper-attention” (142). As the evolution of the human brain has shown, this kind of change is both inevitable and unstoppable.
This is not necessarily a bad thing, writes Gladstone. She cites a study that measured the brain activity of both “computer-literate” (144) and “inexperienced” (144) adults while they performed “a simple Internet search” (144). MRI scans found “twice as much brain activity in the Web-savvy group” (144), particularly in areas that control “decision making and complex reasoning” (144). Hence, Gladstone concludes, Google isn’t making humans stupid; if anything, it’s making them smarter.
In 2009, a biomedical engineer posts to Twitter using a “cap with sensors that could read his brainwaves” (145). Scientists are now working to develop this technology without a cap; Intel predicts they will “succeed by 2020” (145). The technology could help cure diseases like “Parkinson’s, epilepsy” (146) but—in the wrong hands—could be used to “remotely hijack or control neural devices” (146). It will be up to humans to determine whether “their machines do more harm than good” (146).
According to “visionary inventor” (148) Ray Kurzweil, the “real and the virtual” (148) will merge to form “the Singularity” (148) by 2045. By then, the same technology that shrank computers from the size of a building to the size of a pocket will be used to create “nanobots—blood cell-sized devices” (148). These nanobots will be able to enter human bloodstreams to “repair” (148) bodies or thrust humans into “virtual reality from within the nervous system” (148). Kurzweil praises the Singularity as the “ability to reach beyond our limitations” (148) as humans. Virtual reality pioneer Jaron Lanier, on the other hand, believes the Singularity is a myth, like the Christian Rapture; human existence on Earth could instead end through “plague, asteroid, or alien invasion” (149), global warming, nuclear war, or “a takeover by our own artificial intelligence” (149).
What are humans to do in the face of these rapidly evolving technologies and their effects on the media we consume? Gladstone advises that we play an active role in our media consumption, trust reporters who “demonstrate fairness and reliability over time” (150), and read “the original documents they worked from” (150). People can “assemble in networks of peers” (150) to bring unreported situations, such as voter fraud or human rights abuses, to light. Harvard professor Yochai Benkler writes that peer-based “spontaneous networks” (150) can have an even greater impact than the media at large. Gladstone also argues in favor of “less restrictive intellectual property laws” (151) and “an open Internet” (151) where everyone can develop and share their inventions, regardless of device manufacturer.
Journalist Robert Wright claims that technology is “no guarantor of moral progress or civility” (152). It's up to consumers to hold the media accountable and consume responsibly. Wright calls the current moment a test of both the political and “moral imagination” (153), and we will either reach “a new equilibrium” (153) at a higher level than ever before, or “we could blow up the world” (153).
Unlimited access to news from everywhere doesn't cheapen that news. All news “is relevant” (154) in an increasingly politically interconnected world, and as consumers with unlimited access, we can “act, easily, to spread […] and even influence” (154) the news narratives. It's not the media who are the enemy of the consumer, but the “neural impulses that animate our lizard brains” (155) that present real limitations.
What influence do the media have on the actual outcome of a war? In answer, Gladstone directly connects Pershing’s censorship of Seldes’s interview with von Hindenburg to the rise of Hitler and the Nazis. Though critics blamed the media for Vietnam Syndrome, the media’s reporting did not sour the war effort until the mid-1970s. A Gallup poll concluded that public support for the Vietnam War dropped by 15 points every time the number of American soldiers injured or killed rose by a factor of 10. According to William Hammond of the Army’s Office of Military History, it was “flawed strategy, bad intelligence” (90) that caused support for the war to drop, not the press.
A 1991 University of Massachusetts Amherst study finds that those polled estimated 100,000 casualties in the Vietnam War; the actual number is closer to 2 million. Researchers conclude that the narrative of Vietnam was reframed to fit an image of “an irresolute, half-hearted military campaign” (91) weakened by “the objections of the anti-war movement” (91). The American public begins to shy away from overt anti-military demonstrations and watches television news that keeps them ignorant of the “history of the conflict, the politics, and the region” (91) of Kuwait. While things change somewhat in the 2003 Iraq War, the Pentagon still controls what journalists can and cannot see and write about.
Gladstone and illustrator Josh Neufeld often use ironic humor in their illustration choices. For example, when discussing modernism, Gladstone appears as a Duchampian, bearded Mona Lisa. She sits calmly on a memorial marker for the Battle of Antietam, the bloodiest day in US history, and injects a cell phone into her arm like a drug user. Balanced by appropriate reverence, these humorous illustrations keep the sometimes dry information from overwhelming the reader.