9/23/13 Cultural analytics: A guest lecture by Lev Manovich

Lev Manovich—a Computer Science professor and practitioner at the Grad Center who writes extensively on new media theory—delivered a guest lecture on visualization and its role in cultural analytics and computing on 9/23.

Basing his discussion on a range of visualization examples from the last decade or so, Lev highlighted how the rapid emergence of tools for collecting data and writing software has allowed artists, social scientists, and others to investigate and question:

  • the role of algorithms in determining how technology mediates our cultural and social experiences,
  • how to work with very large datasets to identify social and cultural patterns worth exploring,
  • the role of aesthetics and interpretation in data visualization projects,
  • and how visualization projects can put forth reusable tools and software for working with cultural artifacts.

He also discussed previous and future projects undertaken by his lab, which he developed at the University of California, San Diego, and which is now migrating to the CUNY Graduate Center.

Class discussion following the lecture highlighted the value of transparency in Lev’s work and processes—a value he affirmed has always defined his own publishing philosophy, even before he began writing software.

Another line of inquiry concerned how machines can be programmed to automatically “understand” content. A current challenge lies in developing computational methods that can make meaningful assessments of complex, contextualized objects. For instance, how do we train machines to go beyond simply recording strings of characters or groups of pixels (the kinds of data computers are fundamentally good at collecting), and instead write programs that have the potential to generate insights about types of sentences or faces? What is the role of visualization in meeting this challenge, and how is it different from other scientific methods, like applying statistics to big data?

The Science/Humanities Gap

A few of the DefiningDH blogs have touched on the disparity between digital research methods in the sciences and in the humanities, and on how humanists can use technology in their work. Here is a recent NY Times article I stumbled across on this:


Without mentioning Digital Humanities per se, the author (who is responding to another interesting article about how humanists MUST embrace the sciences) believes humanists are well aware of this gap:

Pinker notes the antiscientific tendencies of what he calls “the disaster of postmodernism, with its defiant obscurantism, dogmatic relativism, and suffocating political correctness.” But literary studies, the bastion of these tendencies, have long been moving in other directions, including a strong trend toward applying scientific ideas and methods. There is, for example, the evolutionary and neurological study of literature and, most recently, the use of computer data-mining.

There is, then, good reason to think that the greater problem is scientists’ failure to attend to what’s going on in the humanities.

In the readings this week, Lev Manovich poses a similar problem in relation to data access and interpretation:

I have no doubt that eventually we will see many more humanities and social science researchers who will be equally as good at implementing the latest data analysis algorithms themselves, without relying on computer scientists, as they are at formulating abstract theoretical arguments. However, this requires a big change in how students in humanities are being educated.

Manovich leaves this question open-ended, and it’s a big one. Both authors seem bothered by disciplinary narrowness and a lack of cooperation across fields.

I don’t know about anyone else, but part of the reason I was attracted to Digital Humanities was the fact that many of my research and teaching questions can’t be answered by taking more Literature classes.