I hope yesterday’s presentation was helpful. The students had some good questions. As I mentioned, I’d be glad to follow up with them individually if they have more specific questions or want to discuss options.
By the way, I was reminded today that CartoDB has started to offer online tutorials for beginners. More info here: http://cartodb.com/academy. The first session has already taken place, but they'll have others, and the material will be archived at that link. Please pass this along to your students if you think it'd be helpful.
Steven Romalewski, Director,
CUNY Mapping Service
Center for Urban Research at The Graduate Center / CUNY
While many graduate programs continue to focus on tenure-track placement rates, a growing proportion of humanities scholars are embracing a much broader range of intellectually stimulating careers in, around, and beyond the academy. Focusing both on her own career path and on her research at the Modern Language Association, the Scholarly Communication Institute, and the Scholars’ Lab at the University of Virginia, Katina Rogers will discuss strategies to support professionalization, public scholarship, and career development across a wide array of possible outcomes.
Katina Rogers is managing editor of MLA Commons, the Modern Language Association’s new online platform for collaboration and scholarly communication. She previously served as Senior Research Specialist with the Scholarly Communication Institute, a Mellon-funded humanities think tank based in the University of Virginia’s Scholars’ Lab. Her current research focuses on graduate education reform, career paths for humanities scholars, and innovative modes of scholarly production. Katina holds a Ph.D. in Comparative Literature from the University of Colorado.
This talk explores elements of the scholarly edition in the context of new and emerging social media from two pertinent perspectives. The first is foundational: the edition's theoretical context, particularly as that context intersects with a utility-based consideration of the toolkit that allows us to consider the social edition as an extension of the traditions in which it is situated, traditions it also has the potential to inform productively. The second is practical: the iterative implementation of one such edition, A Social Edition of the Devonshire MS [BL Add MS 17,492] (http://en.wikibooks.org/wiki/The_Devo…), carried out by a research team working in conjunction with an advisory group representing key expertise in the methods and content areas embraced by the edition.
Ray Siemens (http://web.uvic.ca/~siemens) is Canada Research Chair in Humanities Computing and Distinguished Professor in the Faculty of Humanities at the University of Victoria, in English and Computer Science, and visiting professor at NYU in 2013. He is founding editor of the electronic scholarly journal Early Modern Literary Studies, and his publications include, among others, Blackwell’s Companion to Digital Humanities (with Schreibman and Unsworth), Blackwell’s Companion to Digital Literary Studies (with Schreibman), A Social Edition of the Devonshire MS, and Literary Studies in the Digital Age (MLA, with Price). He directs the Implementing New Knowledge Environments project, the Digital Humanities Summer Institute and the UVic Electronic Textual Cultures Lab, and serves as Vice President of the Canadian Federation of the Humanities and Social Sciences for Research Dissemination and Chair of the Modern Language Association’s Committee on Scholarly Editions, recently serving also as Chair of the international Alliance of Digital Humanities Organisations’ Steering Committee.
A talk with Kathleen Fitzpatrick sponsored by the CUNY Digital Humanities Initiative and the Digital Praxis Seminar at the CUNY Graduate Center November 4, 2013.
Recent experiments in open peer review, as well as a recent study of open review practices jointly conducted by MediaCommons and NYU Press, suggest that online scholarly communication may be changing the nature of the “peer,” as well as the shapes of scholarly communities. This presentation will explore the history and future of peer review as a means of thinking through the issues that open review raises for communities of practice online.
Kathleen Fitzpatrick is Director of Scholarly Communication of the Modern Language Association and Visiting Research Professor of English at NYU. She is author of Planned Obsolescence: Publishing, Technology, and the Future of the Academy (NYU Press, 2011) and of The Anxiety of Obsolescence: The American Novel in the Age of Television (Vanderbilt University Press, 2006). She is co-founder of the digital scholarly network MediaCommons, where she has led a number of experiments in open peer review and other innovations in scholarly publishing.
Matthew Kirschenbaum spoke about his forthcoming book project, which was recently profiled in The New York Times.
Kirschenbaum’s research asks questions such as: When did writers begin using word processors? Who were the early adopters? How did the technology change their relationship to their craft? Was the computer just a better typewriter—faster, easier to use—or was it something more? And what will be the fate of today’s “manuscripts,” which take the form of electronic files in folders on hard drives, instead of papers in hard copy? This talk, drawn from the speaker’s forthcoming book on the subject, will provide some answers, and also address questions related to the challenges of conducting research at the intersection of literary and technological history.
Matthew G. Kirschenbaum is Associate Professor in the Department of English at the University of Maryland and Associate Director of the Maryland Institute for Technology in the Humanities (MITH, an applied think tank for the digital humanities).
On October 8, CUNY DHI and the Graduate Center Composition and Rhetoric Community (GCCRC) hosted a conversation about the intersection of writing studies and digital humanities with Doug Eyman and Collin Brooke. These two innovative scholars engaged in an important discussion about the future of digital rhetoric. Doug Eyman is a professor of digital rhetoric, technical and scientific communication, and professional writing at George Mason University and the senior editor of Kairos: A Journal of Rhetoric, Technology, and Pedagogy; Collin Brooke is a professor of Rhetoric and Writing at Syracuse University and the author of Lingua Fracta: Toward a Rhetoric of New Media.
Lev Manovich, a Computer Science professor and practitioner at the Grad Center who writes extensively on new media theory, delivered a guest lecture on September 23 on visualization and its role in cultural analytics and computing.
Basing his discussion on a range of visualization examples from roughly the last decade, Lev highlighted how the rapid emergence of tools for collecting data and writing software has allowed artists, social scientists, and others to investigate and question:
the role of algorithms in determining how technology mediates our cultural and social experiences,
how to work with very large datasets to identify social and cultural patterns worth exploring,
the role of aesthetics and interpretation in data visualization projects,
and how visualization projects can put forth reusable tools and software for working with cultural artifacts.
He also discussed past and future projects undertaken by his lab, which was developed at the University of California, San Diego, and is now migrating to the CUNY Graduate Center.
Class discussion following the lecture highlighted the value of transparency in Lev’s work and processes—a value he affirmed has always defined his own publishing philosophy, even before he began writing software.
Another line of inquiry concerned how machines can be programmed to automatically “understand” content. A current challenge lies in developing computational methods that can make meaningful assessments of complex, contextualized objects. For instance, how do we move beyond programs that simply record strings of characters or groups of pixels (the kinds of data computers are fundamentally good at collecting) toward programs that have the potential to generate insights about types of sentences or faces? What is the role of visualization in meeting this challenge, and how does it differ from other scientific methods, such as applying statistics to big data?
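The gap between recording raw strings of characters and making even a crude assessment of what a sentence is can be sketched in a toy example. This is a hypothetical illustration (the vocabularies and the `classify` function are invented for this sketch, not drawn from Lev's lab or any cultural-analytics toolkit): exact character tallies are trivial for a computer, while labeling a sentence type requires some model of language, here just a hand-made bag-of-words overlap score.

```python
from collections import Counter

# What computers are fundamentally good at: recording exact strings.
text = "The quick brown fox jumps over the lazy dog."
char_counts = Counter(text)  # trivial, lossless character tallies

# A first (very crude) step toward "understanding": label a sentence
# by word overlap with tiny hand-made vocabularies. Real systems use
# learned models; this only illustrates the shape of the problem.
VOCAB = {
    "question": {"who", "what", "when", "where", "why", "how"},
    "statement": {"is", "was", "are", "were", "the"},
}

def classify(sentence: str) -> str:
    words = set(sentence.lower().rstrip("?.!").split())
    scores = {label: len(words & vocab) for label, vocab in VOCAB.items()}
    return max(scores, key=scores.get)

print(classify("Why did writers adopt word processors?"))  # question
```

The point of the sketch is how quickly it fails: a question phrased without a question word defeats it, which is exactly the kind of contextual judgment that the discussion identified as hard to compute.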