On 9/23, Lev Manovich, a Computer Science professor and practitioner at the Grad Center who writes extensively on new media theory, delivered a guest lecture on visualization and its role in cultural analytics and computing.
Drawing on a range of visualization examples from roughly the last decade, Lev highlighted how the rapid emergence of tools for collecting data and writing software has allowed artists, social scientists, and others to investigate and question:
- the role of algorithms in determining how technology mediates our cultural and social experiences,
- how to work with very large datasets to identify social and cultural patterns worth exploring,
- the role of aesthetics and interpretation in data visualization projects,
- and how visualization projects can produce reusable tools and software for working with cultural artifacts.
He also discussed past and future projects undertaken by his lab, which was developed at the University of California, San Diego, and is now migrating to the CUNY Graduate Center.
Class discussion following the lecture highlighted the value of transparency in Lev’s work and processes—a value he affirmed has always defined his own publishing philosophy, even before he began writing software.
Another line of inquiry concerned how machines can be programmed to automatically “understand” content. A persistent challenge lies in developing computational methods that can make meaningful assessments of complex, contextualized objects. For instance, how do we go beyond training machines to simply record strings of characters or groups of pixels (the kinds of data computers are fundamentally good at collecting) and instead write programs that can generate insights about types of sentences or faces? What is the role of visualization in meeting this challenge, and how does it differ from other scientific methods, such as applying statistics to big data?
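As a toy illustration of that gap (my own sketch, not an example from Lev's lecture), consider the difference between what a program records natively and even the crudest semantic judgment about a sentence. The `sentence_type` heuristic below is deliberately shallow and purely hypothetical:

```python
# A minimal sketch of the gap discussed above: computers natively record
# characters, while "understanding" a sentence requires a program that
# makes (here, very crude) semantic judgments. Purely illustrative.

text = "Is visualization a scientific method?"

# What the machine is fundamentally good at: recording raw characters.
characters = list(text)  # ['I', 's', ' ', 'v', ...]

def sentence_type(s: str) -> str:
    """Naive classification by final punctuation, a shallow surface cue
    standing in for the much harder problem of contextual understanding."""
    s = s.rstrip()
    if s.endswith("?"):
        return "question"
    if s.endswith("!"):
        return "exclamation"
    return "statement"

print(f"{len(characters)} characters recorded")  # trivially reliable
print(sentence_type(text))                       # 'question', but only by a surface cue
```

The asymmetry is the point: the first output is trivially reliable, while the second rests on a cue that collapses for rhetorical questions, sarcasm, or unpunctuated text, which is exactly where the question about visualization as a method comes in.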