Category: Posts

On The History Manifesto

It has been nearly a year since Jo Guldi and David Armitage published The History Manifesto in October 2014 (their article-length version was published the prior April). In that time the open-access publication has garnered significant attention, both laudatory and critical. The most notable and biting has been Deborah Cohen and Peter Mandler's critique as part of the American Historical Review's Exchange and the final (for now) rejoinder.

Where the two sides most disagreed was over the general direction of the historical profession and the time scales on which it works, and, relatedly, whether longer time scales raise the prominence of historians in public debates. It is now clear that Guldi and Armitage misread Ben Schmidt's analysis of history dissertation time scales, which have actually increased since the "specialization" phase of the historical discipline. To their credit, they have made edits to certain footnotes and the original chart.

There is also something to Guldi and Armitage's argument that historians' role as public intellectuals has declined over the past fifty years, despite Cohen and Mandler's anecdotes about amicus briefs in Lawrence v. Texas. In addition to the cited data on mentions of economists versus historians in the New York Times, a review of op-ed pages shows how few historians sit among the legion of macroeconomists.

I also appreciate Cohen and Mandler's argument that some historical questions impacting policy decisions may be answered most effectively at shorter time spans than hundreds of years. The most cited article in the Journal of Policy History is Susan Reverby's study of Public Health Service syphilis experiments in Guatemala from 1946 to 1948. Anecdotally, some of my favorites have been works that reviewed policies and phenomena on a longer scale. For example, the AHA brief in Obergefell v. Hodges and the works cited within it, like Nancy Cott's Public Vows, examine the relationship between marriage and the state, showing that it has not been rigidly fixed. Armitage and Guldi suggest other ideal topics for long histories, like global governance, climate change, and economic inequality, and indeed these are some of the most pressing public policy challenges of our time. Despite the arguments over time scale, all four scholars agreed that impacting policy is a noble goal for historians.

They are equally in agreement on the "usefulness" of digital techniques. What Cohen and Mandler don't address, and what I see as one of the strengths of The History Manifesto, is that the nature of digital work rewards analysis of long time spans. Most historical works covering long periods of time rely primarily on secondary literature; a scholar can only visit so many archives and read so many letters, after all. With computational techniques, the barriers to longer histories are much lower. These techniques can still work at the microhistory level, allowing minute and important details to be scrutinized. The strongest works of digital history move between these micro and macro levels, providing the data and aggregate insights required by policy makers while still offering the deep context of the best humanities work. As an example, I think of Visualizing Emancipation by Scott Nesbit and Ed Ayers. They simultaneously show the impact of Federal policy on slave emancipation and the agency of slaves as told by their acts of defiance against the Confederacy.

There are many significant challenges in using the new digital techniques, including paywalled digitization of sources, copyright, and the nature of humanities data. But the promise certainly exists for digital scholarship to behave in the ways that Guldi and Armitage suggest it will. I hardly have the experience to challenge either group of distinguished scholars on their analysis of twentieth-century historiography and its impact on public policy. I also feel that there isn't a specific "crisis of the humanities" within the university beyond the general crisis of public funding for the university. Yet I do intend to heed Jo and David's call. Digital techniques are uniquely suited to seeing across significant temporal and spatial scales, and that deep contextualization can best inform how we develop and evaluate future government policy.

Mapping In a Material World

Two weeks ago I made a few statements about mapping that I now regret. In my post on networking, I argued that networks provided a link between statistically driven text analysis and the visually driven map. Networks could be used to visualize topic models or other textual analysis outputs. Networks could also provide a more mathematically rigorous framework for maps (I was thinking specifically of ORBIS). This week, I realized that most historical uses of maps are actually very mathematically or statistically driven, especially GIS-based projects, like Andrew Beveridge's work on racial segregation, that draw on national-level databases.

I also learned that geography and mapping ask questions similar to those of my own field, environmental history. In some ways, geographers have even moved past the determinist ideas of nature that sometimes drive environmental historical scholarship. Reading Tim Cresswell's introductory textbook, I realized that many of its ideas have strong relevance to my potential dissertation topic. While regionalism is mostly passé in geography, the idea of the "region" within the United States could be of interest for my work on Federal environmental policies and the impact of policy on regions. Other humanities-driven geographical frameworks ask similar questions about movement and space over time that I would be interested in exploring in map-driven portions of my dissertation.

Digital and spatial history are a particularly happy marriage, as they have similar strengths: changing scales, computing large quantities of data, and examining different perspectives. I hope to examine the "thick descriptions" which Todd Presner et al. use in HyperCities. The many disparate sources, layers, and perspectives can provide a variety of new meanings when placed in the same geographical space. A major caveat of HyperCities is that it doesn't actually work on my computer (the Google Earth plugin is not available for Linux; I guess I'm not the intended audience for their work?). From the introduction's caveats, functioning is not a major goal of the project, which is intended to be "prototypes, experiments, code modules, and projects that more or less work."[1] Most dissertation committees won't accept an idea of a map, unfortunately.

I also have to be aware of the many challenges that humanities GIS practitioners describe in Spatial Humanities. There appears to be a major gap between the theory of HyperCities and the reality of Spatial Humanities, and it is not a function of technological expertise. Where one set of authors shows the possibilities of mapping complex humanities data, the others explain the difficulty inherent in forcing the heterogeneous data of the material world into the homogeneous formats required by GIS software.
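To make that difficulty concrete, here is a minimal sketch (assuming the geopandas and shapely libraries, with hypothetical records and field names) of the flattening that GIS software demands: uncertain, narrative archival descriptions must be squeezed into typed columns and exact point coordinates.

import geopandas as gpd
from shapely.geometry import Point

# Messy archival descriptions reduced to a rigid, typed schema.
records = [
    {"place": "Mill site, 'two miles above the falls'", "year": 1887,
     "certainty": "approximate", "lon": -77.12, "lat": 38.93},
    {"place": "CCC Camp NP-6", "year": 1934,
     "certainty": "exact", "lon": -78.35, "lat": 38.53},
]

gdf = gpd.GeoDataFrame(
    records,
    geometry=[Point(r["lon"], r["lat"]) for r in records],
    crs="EPSG:4326",  # every record forced into a single coordinate system
)

# Writing a shapefile homogenizes further: fixed column types, truncated
# field names, and no room for narrative nuance.
gdf.to_file("historic_sites.shp")

Each "certainty" note here stands in for pages of qualification that the shapefile format simply cannot hold.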

As I plan to document changes in the physical world over time and rigorously examine policy decisions, statistically where applicable, geographical perspectives hold much promise for me. There are many possibilities as long as I can ensure that my work does not become one of the unreproducible prototypes that litter digital history.

 

References

1. Todd Presner, David Shepard, and Yoh Kawano, HyperCities: Thick Mapping in the Digital Humanities (Harvard University Press, 2014), 13.

Visualizations in Digital History

A data visualization is a graphic that meaningfully organizes data or information in systematic and multidimensional ways.[1] As Edward Tufte suggests in his 2006 book Beautiful Evidence, visualizations are not just a way of seeing, but of showing. I believe these two steps are crucial in the use of visualizations for digital history. As we pile up our digital sources, we must have a way of "reading" this data, and efficient use of visualizations is one of the only ways to do so.

If we need to be experts in mapping, text analysis, or networks in order to be digital historians, we must at least be proficient in creating visualizations to "see" our work. It is impossible to "read" a 10,000-line database or even understand how 100 topics line up across 2,000 texts. Visualization helps digital historians gain empirical understanding from their data and actually observe it in meaningful ways.
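As a small illustration, here is a hedged sketch of what "seeing" such a model might look like: a heatmap of a document-topic matrix, using randomly generated stand-in data (in practice the matrix would come from a topic-modeling tool such as MALLET).

import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: 2,000 documents by 100 topics, each row summing to one.
rng = np.random.default_rng(0)
doc_topics = rng.dirichlet(np.ones(100), size=2000)

fig, ax = plt.subplots(figsize=(10, 6))
im = ax.imshow(doc_topics, aspect="auto", cmap="viridis")
ax.set_xlabel("Topic")
ax.set_ylabel("Document")
fig.colorbar(im, label="Topic proportion")
fig.savefig("topic_overview.png", dpi=150)  # a first, impressionistic "reading"

No one could read those 200,000 numbers directly, but the eye can pick out dominant topics and odd clusters in seconds.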

Of course, a scholar "observing" a database is likely an expert in that field. Presumably, they can organize patterns in the data in ways that others may not understand. There is a big difference between "seeing" our work and "showing," or arguing, how our work impacts history. A visualization created for our own use need not be properly labeled or contain the dense but clear, data-rich graphics that Tufte advocates. Yet to become commonly accepted evidence and advance an argument, a visualization must meet these higher standards.

Where writing has been the traditional historian's craft, the ability to organize data into rich and meaningful visualizations will become that of the digital historian. Historians must also be skilled in reading visualizations to interpret and evaluate their underlying arguments. In reading visualizations, we must analyze the appropriateness of the underlying data. Is it actually homogeneous and precise? Does the visualization's argument misread the data? Might the argument be made more forcefully by a different form of visualization? More than many of the underlying analytical techniques, skill in creating and evaluating data visualizations may be the most crucial for budding digital historians.

References

1. David J. Staley, Computers, Visualization, and History: How New Technology Will Transform Our Understanding of the Past (New York: M.E. Sharpe, 2003), 3.

Networking the Digital Humanities

In reading about network theory, I found elements in which networks are fundamentally different from digital history's other methodological frameworks: textual analysis and mapping. Like textual analysis, there is rigorous mathematical theory behind the practice of analyzing networks. Unlike textual analysis, networked data is much more agnostic; as Scott Weingart says, anything can be represented as a network, even though it often shouldn't be. Maps are similarly heterogeneous with their data, allowing a wide variety of stuff to take up space in a variety of ways. Yet maps have less structure and fewer mathematical rules than traditional graph theory, with its logical tendency for components to connect or behave in certain ways.

Given these similarities and differences, it's no surprise that network theory can act as a kind of glue in the digital humanities. One of the more powerful ideas that I read about, and am starting to understand more intricately, is how the traits of textual analysis can combine with networks to provide additional insights or explorations; specifically, topic models can form networks that connect documents in new ways.
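A minimal sketch of that idea, with invented document-topic weights (real ones would come from a topic-modeling tool) and the networkx library, links two documents whenever they share a dominant topic:

import itertools
import networkx as nx

# Hypothetical document-topic proportions.
doc_topics = {
    "diary_1862": {"war": 0.6, "family": 0.3, "weather": 0.1},
    "letter_1863": {"war": 0.5, "supplies": 0.4, "family": 0.1},
    "sermon_1861": {"faith": 0.7, "war": 0.2, "family": 0.1},
}

def dominant_topics(weights, threshold=0.3):
    """Topics that make up at least `threshold` of a document."""
    return {topic for topic, w in weights.items() if w >= threshold}

G = nx.Graph()
G.add_nodes_from(doc_topics)
for a, b in itertools.combinations(doc_topics, 2):
    shared = dominant_topics(doc_topics[a]) & dominant_topics(doc_topics[b])
    if shared:
        G.add_edge(a, b, topics=sorted(shared))

print(G.edges(data=True))  # diary_1862 and letter_1863 connect through "war"

The network is not the argument; it is a new way of seeing which documents belong in conversation with one another.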

Networks can also provide a rigor to maps that can unlock how events occur and how they are related to one another. As an example, I'm thinking of ORBIS. In looking at a map of the Roman Empire, historians can imagine how trade and travel networks interact with the physical challenges to those networks. Elijah Meeks and ORBIS provide a certainty that can be rigorously analyzed alongside other information about the Roman Empire to approach new insights about the use of space.
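A toy example, with invented places and travel costs rather than actual ORBIS data, shows where that kind of certainty comes from: once routes become weighted edges, "distance" is something you can compute rather than eyeball.

import networkx as nx

# Hypothetical routes with travel time in days as edge weights.
routes = [
    ("Roma", "Ostia", 1),
    ("Ostia", "Carthago", 4),
    ("Roma", "Capua", 3),
    ("Capua", "Carthago", 9),
]

G = nx.Graph()
G.add_weighted_edges_from(routes, weight="days")

path = nx.shortest_path(G, "Roma", "Carthago", weight="days")
days = nx.shortest_path_length(G, "Roma", "Carthago", weight="days")
print(path, days)  # ['Roma', 'Ostia', 'Carthago'] 5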

In combining networks with other techniques in digital history, scholars can gain additional insights into their sources. Networks make textual analysis more richly connected; geographic networks are more thoroughly contextualized. When used properly, network theory can connect aspects of digital history. Even when initially focusing on other methodologies, researchers should not forget the power of networks and their ability to answer historical questions.

Yet I would pause in my enthusiasm for networks. Because they are mathematically determined, with strictly theorized relationships and behaviors, there is less agency for individuals within the systems. This is a critique of many aspects of the digital humanities, and one we will discuss with Johanna Drucker's conception of data in Graphesis. David Easley and Jon Kleinberg discuss urban segregation in their book Networks, Crowds, and Markets, particularly how segregating neighborhoods follow a pattern of balancing called the Schelling model. Anyone who has seen Washington, DC's demographics change over the past decade is aware of the forces this model describes. Yet it is equally incorrect in describing school attendance patterns between 1954 and 1975, when the Supreme Court in Brown v. Board ruled that segregated schooling was unconstitutional and later decisions required that schools maintain racial balance. Public schooling did not proceed under the Schelling model, an important caveat to remember in assessing the power of networks.
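To see what the model actually does, here is a toy, one-dimensional sketch (parameters are illustrative, not drawn from Easley and Kleinberg): agents relocate when too few neighbors share their type, and even this mild preference tends to sort the line into like-with-like clusters.

import random

random.seed(2)
N, NEED = 40, 0.5
city = [random.choice(["X", "O", None]) for _ in range(N)]  # None = vacant lot

def satisfied(i, agent=None):
    """Is the agent (or would the agent be) happy at position i?"""
    agent = agent or city[i]
    neighbors = [c for c in city[max(0, i - 2):i] + city[i + 1:i + 3] if c]
    return not neighbors or sum(n == agent for n in neighbors) / len(neighbors) >= NEED

for _ in range(500):
    unhappy = [i for i, c in enumerate(city) if c and not satisfied(i)]
    if not unhappy:
        break
    mover = random.choice(unhappy)
    options = [j for j, c in enumerate(city) if c is None and satisfied(j, city[mover])]
    if options:
        dest = random.choice(options)
        city[dest], city[mover] = city[mover], None  # relocate the unhappy agent

print("".join(c or "." for c in city))  # typically ends in like-with-like runs

Brown v. Board is a reminder that a legal mandate can override exactly this kind of decentralized sorting.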

 

The Clue

As an early experiment in macroanalysis, Franco Moretti attempted to uncover how generic detective stories became Sherlock Holmes. In his essay "The Slaughterhouse of Literature," one of ten in Distant Reading, he concluded that the plot device of the clue was what separated Sir Arthur Conan Doyle's work from the forgettable detective novels in "the great unread" of the period. The clue worked for readers because it was a connection between the past event or crime and the present discovery, and readers particularly enjoyed making this connection. The clue also worked as a trackable device that Moretti could follow in his distant literary reading of the creation of the detective novel as a genre.

But the clue is also a critical signpost for historians. Many have compared historians to detectives. Even PBS. In Teaching Hidden History, another class I'm taking this semester, we curate 12 "objects" that teach students about a period in history. Our objects act as "clues" to connect students with past events. In this pedagogical exercise, I used clues to create a narrative, just as historians work with evidence.

To return to the process of distant reading or text mining: the coder must unearth which clues they are attempting to follow, just like a historian performing a traditional close reading in the archives. What is different is the scale at which they are doing this analysis. The computer "reads" every word, just like a human, but it is up to the human to determine the units of analysis at which the computer reads. As Moretti and Matthew Jockers confirm, the skill of the user is just as essential, and macroanalysis will not make close reading obsolete.
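A tiny sketch of that choice, using an arbitrary sentence: counted as single words or as two-word phrases, the same passage yields different clues.

from collections import Counter

text = "the dog did nothing in the night time and that was the curious incident"
words = text.split()
bigrams = [" ".join(pair) for pair in zip(words, words[1:])]

print(Counter(words).most_common(3))    # single words: "the" dominates
print(Counter(bigrams).most_common(3))  # two-word units: a different set of clues

The computer will happily count either; deciding which unit matters is the historian's close-reading judgment, applied at scale.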

One of the criticisms of digital history, more than a few decades into its disciplinary life, is that it has not produced the insights promised by its powerful technologies. The types of questions and analysis performed by distant text mining will necessarily be different from those of traditional close reading, but digital historians still need to follow clues to their conclusions. Moretti's slaughterhouse was the nineteenth-century book industry, molding literary form to popular tastes. Like Conan Doyle, digital historians must ensure that our algorithmic black boxes don't obscure our narrative but provide clues to the past.

Digital Scholarship

Digital History is frequently defined as "using computers to research, present, or teach history." Last week we explored some pedagogical aspects that are unique to digital history (teach). In future weeks, we will discuss the many tools and techniques that form a part of digital history analysis and research practices (research). So this week's readings examined how scholarship works differently in digital formats (present, on the web).

At its most basic level, scholarship has similar aims in digital or analog formats: to transmit ideas, start conversations, and earn reputation. The primary differences lie in the apparatus surrounding the scholarly publishing arena and the digital publishing arena. This "apparatus" comes down to evaluation, transmittal, ownership, and tone.

Traditional evaluation, oversimplified, is a mix of peer review, editor discretion, and disciplinary awards. Peer review determines whether a work gets published at all, a problem that does not exist for online works, where the barrier to publishing is almost nonexistent. One of the problems inherent in the firehose of published online works is how to "catch the good," as Joan Troyano suggests the Journal of Digital Humanities and DHNow have done.

Another major difference in how peer review works for digital scholarship versus traditional scholarship is who constitutes the peer review group. Instead of anonymous reviewers or editors, many digital projects have gatekeepers that include grant funders and links from other experts. More than in traditional publishing, but less than in popular media, eyeballs are the coin of the digital realm. The most important metric in this regard is time spent on a page rather than the pure number of hits. We're academics, not advertisers, after all.

Finally, digital scholarship is only occasionally comparable to traditional scholarship in structure. There are sometimes similar formats: scholarly editions of texts in digital or traditional form have comparable purposes, if different capabilities, and collections of digital objects have some similarities to physical object collections. But increasingly a digital project looks totally different from articles and monographs. These different structures, such as born-digital narratives, thematic research collections, and interactive scholarly works, must also be evaluated differently. We don't question whether a book's binding adds to its argument, but we must necessarily question a site's architecture, UX design, and access.

So far I have primarily focused on the similarities between traditional and digital publishing. Yet there are some square-peg aspects of the digital which cannot be forced into the traditional round holes of peer review, scholarly communication, and tenure and promotion. Traditional scholarship prioritizes the single author. Digital environments have always been interdisciplinary and collaborative, with multiple partners bringing their interpretive and technical skills to a project. This makes it difficult to assign credit for particular contributions in the ways which tenure and promotion committees prize (let alone for the alt-ac scholars who deal in different metrics of career success).

There are many other aspects of digital publishing which differ from traditional publishing (tone, format, and where do you put games?!), yet one of the most difficult and still partly undefined areas concerns copyright. Traditional scholarly publishing has extensive rules about copyright, citation practice, and attribution (the difficulty of getting photo permissions notwithstanding). The web is a virtual wild west of fair use, open access, and rights statements. These issues are being litigated as we speak, with several major legal decisions occurring this week.

Scholars need to think about how they want to publish their scholarly output. The AHA recommends embargoing a dissertation from publication even behind a paywall (ProQuest's dissertation database). Even when looking for "open access" options, there are several varieties from which to choose. I've published a shortened version of my M.A. thesis freely online at Academia.edu and have been very pleased with the feedback, hits, and conversation it garnered. It was a difficult "leap" for me to publish online, but a successful one. Even in the current format (no cost to me, no cost to readers), I've heard valid criticism of Academia.edu for being owned by venture capitalists. That it is "Facebook for academics" might be a bug or a feature; the jury is still out.

Despite the complex and continually evolving landscape of digital publishing, scholars need to pay just as close attention to the evaluation, ownership, and distribution of our work. Without intelligent publication of scholarship, digital history loses one of its defining legs, and our research and teaching teeter ever more precariously. We must emphasize the strengths and depths of the digital medium without losing the goals of traditional publishing: transmitting ideas, starting conversations, and earning scholarly reputation.

Learning Through Creating

Mills Kelly has a simple argument: we learn best by doing. This is not what most people want to discuss (or argue/yell/::shake fists:: about) when they hear the concept behind "Lying About the Past," his reimagining of the historical methods class. They want to talk about ethics and the historian's version of the Hippocratic Oath, "tell no lies" (some historians take the doctor thing very seriously). But they miss some of the important innovations and successes that digital pedagogy brought to Dr. Kelly's course. The students were extremely motivated, they willingly visited archives and sought direction from more senior scholars and librarians, and they created history. It may be helpful to investigate how his class succeeded because of the web, and not incidentally to it.

One characteristic unique to the digital is the low barrier to creation and remix. With blogs, Twitter, Tumblr, and Wikipedia, it was very easy for the students to create public content. Instead of term papers created only to be eventually filed away, students could unleash their work on a real audience. It is somewhat thrilling to have your work viewed by people you never anticipated and to receive feedback, good or bad. Another aspect of the digital is that it flattens traditional authorities. Kelly's class learned best when they attacked specific problems in small groups, not necessarily directed by the professor. Many of the book's pedagogical techniques came from the surprising ways that students interacted with the course material.

Despite the focus on digital techniques, "analog" history was not forgotten, and students had to supplement digital knowledge with traditional secondary and primary sources from a variety of repositories. Digital historians know that the librarian or archivist is still a crucial collaborator. While students supplemented their digital work with analog sources, they also "remixed" existing digital sources for their own purposes, another characteristic of digital culture.

Because one can never quite cover every new technology in the rapidly changing digital field, Kelly's book did not cover one of the major developments in digital pedagogy: Massive Open Online Courses (MOOCs). When I first heard about MOOCs, I figured professors in digital history and at CHNM would be wildly in favor of them. They seemingly check many boxes that the Center is interested in promoting. Open and free? Check. Massively promotes history to popular audiences? Eh, mostly check. Encourages university-level learning? In a way…

So MOOC platforms never fulfilled their lofty promises, even in their first iterations. But I didn't understand the resistance to MOOCs until I looked at how they used the "digital": top-down directed learning and rote evaluation with little human feedback. The primary strength of MOOCs is their efficiency of scale, which is more appealing to venture capitalists than to educators. Unlike MOOCs, Kelly used the digital in ways that promote learning rather than distribution. First, the digital allowed for easy creation of content and feedback. Second, learning partly happened through non-hierarchical communities and collaboration. Third, the class reused digitized sources in new ways while also adding "analog" information from archives and libraries to the web.

As digital historians, we frequently argue that historians need to “meet the public” on the web where it lives. But we must also remember to use the particular strengths of the web to match our pedagogical aims. Trying to shoehorn previous ways of teaching into a digital format is bound to fail.

Finding Your Audience

When I first began to read into the practices of "digital public history," I focused on works, like Carl Smith's, asking whether "serious" public history on the web was possible. But this is the wrong question, because it is directed AT audiences rather than emanating from them. It comes from the authoritative place of the expert. "We're from the University and we're here to help" was a quote I loved from Steven Lubar's Seven Rules for Public Humanists (quoted by Sheila Brennan and others), and it accurately describes this attitude. Web audiences enjoy being told what to do even less than physical audiences do.

Better is the approach CHNM took in creating the Histories of the National Mall project. They thought deeply about their audience and tested extensively with people who might approximate that target audience. Lo and behold, when the project launched, the audience was there (though I haven't seen actual traffic data, the project has received critical acclaim)! This is not all that different from what museum spaces need to do in advance of an exhibition opening. Frequent interaction between audience and museum helps both get more out of the relationship: museums receive more engagement, and audiences receive better experiences.

In the digital realm, this is even more crucial. Mark Tebeau's project on Cleveland history through oral interviews is a great example of audience-directed design. Like the Mall history project, Tebeau allowed his audience to tell the stories that they felt were most important about individual neighborhoods or events. Through a sophisticated mapping mechanism, the project's users can enjoy the stories located geographically in space. This is such a robust and powerful community history project that it almost makes me want to go to Cleveland!

Finally, many commentators, including Melissa Terras, Sheila Brennan, and Tim Sherratt, have described the unintended audiences that the web brings. It's important to design and test based on your user stories, but the third important part of designing for web audiences is to iterate. Your project will inevitably have either unexpected uses or unforeseen problems. The most successful projects take this feedback and use it to engage with users more successfully. This is nothing new; it's what public historians have been doing for decades, and it's even more important when the history is digital.

 

Data Takes Command

This week continued a theme that we have focused on over the past three weeks: the digital is fundamentally different from physical history, and we ignore those differences at our peril. As I learned about search, databases, and the new ways in which historians work, I became a little more pessimistic about this whole digital history thing. Patrick Spedding, Caleb McDaniel, Lara Putnam, and Marlene Manoff all criticized some of the unexamined practices of historians using digital tools. Whether these were poorly OCRed full-text databases used for keyword searching, inexact and unreproducible searches, or the troubling implications of digital methods in general, the digital offers as many pitfalls as peaks.

Theoretically, as someone versed in the new digital methods, I will be aware of these shortcomings, think more deeply about what is happening "inside the box," and understand when a digital method is less appropriate than a traditional close reading. James Mussell describes some of these digital best practices for historians: connect directly to APIs and you will know exactly what search parameters you are using, while also allowing other scholars to replicate your search. But Jennifer Rutner and Roger Schonfeld's Ithaka S+R report on research practices indicated that historians fall much more into the camp of inexact keyword searching, ending up with piles of unidentified archival photos (not a new phenomenon) and a mass of citations. The unorganized hard drive has replaced the unorganized filing cabinet.
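As a hedged sketch of what that kind of practice can look like, here is a query against the Library of Congress's Chronicling America newspaper API (the endpoint and parameter names reflect its public documentation at the time of writing and may have changed): every search parameter is explicit, recorded, and therefore citable and repeatable.

import requests

params = {
    "andtext": "conservation",  # the exact search term, stated up front
    "format": "json",
    "page": 1,
}
response = requests.get(
    "https://chroniclingamerica.loc.gov/search/pages/results/",
    params=params,
    timeout=30,
)
response.raise_for_status()
data = response.json()

# Recording the full query URL alongside the results makes the search replicable.
print("query:", response.url)
print("total hits:", data.get("totalItems"))
for item in data.get("items", [])[:5]:
    print(item.get("date"), item.get("title"))

Compare that to "I searched a vendor database for conservation sometime last spring," which no one, including the future me, can reproduce.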

And I cannot claim to be much better or more organized, especially when it comes to databases. As part of my day job as a public historian, I'm constantly creating databases, from simple Excel spreadsheets for research notes to large proprietary databases of digital documents or resource figures. Yet through Stephen Ramsay's article (not too much has changed in the past decade regarding relational database philosophy) and especially Mark Merry's online course, I learned many crucial organizing factors and philosophies behind relational databases, from method- versus source-driven design to entity relationships.
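As a minimal sketch of what those entity relationships look like in practice (using Python's built-in sqlite3, with hypothetical table names and sample rows), separate tables for archives, sources, and people are joined through keys rather than crammed into one spreadsheet:

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE archive (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE source (
    id INTEGER PRIMARY KEY,
    archive_id INTEGER REFERENCES archive(id),
    title TEXT,
    year INTEGER
);
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE source_person (  -- many-to-many link between sources and people
    source_id INTEGER REFERENCES source(id),
    person_id INTEGER REFERENCES person(id)
);
""")

cur.execute("INSERT INTO archive VALUES (1, 'National Archives')")
cur.execute("INSERT INTO source VALUES (1, 1, 'Report on forest reserves', 1936)")
cur.execute("INSERT INTO person VALUES (1, 'F. A. Silcox')")
cur.execute("INSERT INTO source_person VALUES (1, 1)")

cur.execute("""
SELECT person.name, source.title, archive.name
FROM source_person
JOIN person ON person.id = source_person.person_id
JOIN source ON source.id = source_person.source_id
JOIN archive ON archive.id = source.archive_id
""")
print(cur.fetchall())  # who appears in which source, held where

The design mirrors the structure of the sources themselves rather than the convenience of a single flat spreadsheet.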

Lev Manovich argued that while new media and databases can approximate or represent physical, real-world objects or media, it should not be forgotten that this representation is built on top of software. What we need to remember as we create new research methodologies is not only that we are constantly interfacing with software and data, not paper documents, but that these interactions are directed by the digital-industrial complex: ProQuest, Elsevier, and Thomson Reuters. Until enough scholars demand API endpoints, transparent and stable databases, and disclosure about OCR accuracy, these keepers of the digital documents are unlikely to listen.

Creative Commons History

The following essay argues that public media outlets (NPR, ProPublica, etc.) should publish more material with a Creative Commons license. NPR receives much of its funding from member stations who pay dues to broadcast its programs, so using CC licenses for all output would gut its revenue. It makes sense, however, to get as many eyeballs as possible on breaking news and investigative pieces, because these are worth more in reputation and prestige.

Our academic work can also fall into this category, because it is ostensibly worth more to receive the prestige and reputation that come from having your ideas spread widely than to keep them on your own site. There is still a time and place for submitting to peer-reviewed journals and trying to publish your dissertation with a university press; the particular prestige of that type of scholarly communication is distinct from the kind provided by the crowd. I'm just suggesting that your witty 1,000-word piece on a topical subject might have an unintended life if you set it up correctly.

Also, I wanted to exercise my internet right to repost a piece licensed CC BY-NC-ND 4.0.

Why Some Public Media Content Should Be Creative Commons Licensed

By Melanie Kramer

This essay is published under an Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. Feel free to copy and redistribute it, with attribution and a link back to the original source.

In February 2015, NPR's Danny Zwerdling published a four-part investigative series on nurses who had been injured on the job. The series was published on the NPR website and distributed using NPR's API to member station websites.

On the NPR homepage, Danny’s investigative work was given the same treatment as other NPR and member station pieces published on the same day. What this means is that a quick news spot from The Two-Way or a human interest piece on ice sculptures appeared on the NPR homepage for roughly the same length of time as Danny’s work, which took over a year to put together. If you took a glance at the NPR homepage several days later, you were likely to miss Danny’s work altogether.

NPR does have an API but it stipulates that use is for “personal, non-commercial use, or for noncommercial online use by a nonprofit corporation which is exempt from federal income taxes under Section 501(c)(3) of the Internal Revenue Code.”

This means Danny’s lengthy investigation could not be published by a for-profit alternative weekly or by a nursing organization. It could not be picked up by a wire service or distributed by a foreign news organization. Danny’s piece has won awards and inspired local stations to localize coverage but did it have the greatest possible impact it could have had online?

I was thinking about Danny’s pieces when I saw that ProPublica encourages anyone to steal its stories, which are published under a Creative Commons license and frequently picked up by other publications. As Richard Tofel and Scott Klein write:

In the last year, republication of ProPublica stories under our CC license has increased dramatically. Through November, we’ve recorded more than 4.2 million page views this year for authorized reprints of our work, which is up 77 percent over the same period in 2011, and is the equivalent of an additional 29 percent on top of the traffic to our own web site.

Among the literally thousands of sites that have reprinted ProPublica stories in 2012 alone are Ars Technica, the Atlantic Wire, CBS News, the Charlotte Observer, the Chronicle of Higher Education, the Cleveland Plain Dealer, Foreign Policy, the Houston Chronicle, the Huffington Post, the Las Vegas Sun, the Los Angeles Times, the Miami Herald, MinnPost, Minnesota Public Radio, Mother Jones, MSNBC, Nature, NBCNews.com, the Newark Star-Ledger, the New Haven Register, the New York Daily News, Nieman Journalism Lab, the San Jose Mercury News, Scientific American, the Seattle Times, Slate, Talking Points Memo, the Tampa Bay Times, the Trentonian, USA Today, the Utne Reader, Wired and Yahoo News.

Why do we do this? ProPublica’s mission is for our journalism to have impact, that is for it to spur reform. Greater reach – the widest possible audience – doesn’t equate to impact, but it can help, and certainly doesn’t hurt. So we encourage it. And, of course, we started in 2008 with almost no audience or reputation at all, and needed – and still need – to increase the circle of people who know us, and our work. CC helps us achieve that goal.

ProPublica also makes sure it can track the impact of any stories that are distributed under the Attribution-NonCommercial-NoDerivatives 4.0 International Creative Commons license. This is really important, because it allows ProPublica to keep track of how its stories are being disseminated and to include numbers from republished material in reports that go out to funders and its board. As Tofel and Klein note:

We created a simple JavaScript beacon that we call Pixel Ping. We designed it, working with developers at DocumentCloud, to be as lightweight as possible and so that it doesn’t violate, either in spirit or letter, the privacy policies of the sites that republish our work. Pixel Ping simply counts the number of times our stories are read on sites that republish them. It doesn’t collect any information about visitors, and it neither sets nor reads browser cookies. It’s open source, too.

I am not advocating that NPR or member stations release all of their content under a Creative Commons license. This would destroy NPR's business model, which is currently based, in part, on member stations' paying dues to broadcast its programming. But for breaking news, which often garners a national audience, and investigative pieces, which are designed to have impact and spur change, doesn't it make sense for the stories to reach the biggest audience possible?

I say breaking news because member stations are often providing in-depth, well-reported coverage of local breaking news events that are of interest to a national audience, and they don't always receive the audience they should. Take, for instance, St. Louis Public Radio's ongoing coverage of Ferguson. It is well reported, in-depth, and covers all of the angles.

If St. Louis Public Radio had offered up its Ferguson coverage under a restricted Creative Commons license with the same stipulations that ProPublica uses, then other stations, and even other news organizations, could have spread St. Louis's coverage beyond the audience of its website. People from around the country would have learned about St. Louis's coverage in this way, and potentially would have wanted to support the station for providing some of the best coverage around. St. Louis would have been able to track its reach and its impact in a different way.

For breaking news pieces that attain national significance and for investigative pieces that require a lot of time, capital, and labor, it makes sense to think of the audience beyond public media's walls online. And it fits within the mission of public media, which is to provide programs and services that inform, educate, enlighten, and enrich the public, wherever that public may happen to be located.