Lev Manovich, a Computer Science professor and practitioner at the Grad Center who writes extensively on new media theory, delivered a guest lecture on visualization and its role in cultural analytics and computing on 9/23.
Basing his discussion on a range of visualization examples from the last decade or so, Lev highlighted how the rapid emergence of tools for collecting data and writing software has allowed artists, social scientists, and others to investigate and question:
- the role of algorithms in determining how technology mediates our cultural and social experiences,
- how to work with very large datasets to identify social and cultural patterns worth exploring,
- the role of aesthetics and interpretation in data visualization projects,
- and how visualization projects can put forth reusable tools and software for working with cultural artifacts.
He also discussed past and future projects undertaken by his lab, which was developed at the University of California, San Diego and is now migrating to the CUNY Graduate Center.
Class discussion following the lecture highlighted the value of transparency in Lev’s work and processes—a value he affirmed has always defined his own publishing philosophy, even before he began writing software.
Another line of inquiry concerned how machines can be programmed to automatically “understand” content. A current challenge lies in developing computational methods that can make meaningful assessments of complex, contextualized objects. For instance, how do we train machines to go beyond simply recording strings of characters or groups of pixels (the kinds of data computers are fundamentally good at collecting), and instead write programs that have the potential to generate insights about types of sentences or faces? What is the role of visualization in meeting this challenge, and how is it different from other scientific methods, like applying statistics to big data?
Lev Manovich’s discussion of data visualization was an interesting conversation about taking large datasets and using them to spark cultural analytics. What I thoroughly enjoyed about this discussion was the aesthetic value of information as it was visually displayed to identify patterns or areas for further exploration. To define the term: data visualization is the process by which a large amount of data is drawn or mapped out so that it takes on some kind of visual value. There are various ways this can be done; some visualizations look like graphs of information, others like abstract pieces of art, and still others like large collages of images (data visualization of visual data). But no matter how it is illustrated, the result is still very data dense.
With regard to the data visualization of visual data, Manovich clarifies that “Our approach is to use visualization as a new descriptive system. In other words, we describe images with images.” By measuring visual characteristics of images from video games or manga, pages are mapped out based on their gray-scale value. When different graphic novels are laid out next to each other in this way, they show striking differences in image types, content, and gray scale.
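The underlying idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the approach described above, not the lab’s actual software: each “page” is reduced to simple visual features (mean gray-scale value and contrast), and pages are then ordered along that axis so they can be laid out side by side. Synthetic arrays stand in for scanned pages, and all names here are my own.

```python
import numpy as np

def page_features(page):
    """Return (mean brightness, contrast) for a gray-scale page array."""
    return float(page.mean()), float(page.std())

rng = np.random.default_rng(0)

# Three synthetic 64x64 gray-scale "pages": dark, mid-tone, and light.
# In practice these would be scanned manga or video-game images.
pages = {
    "dark":  rng.integers(0, 80, (64, 64)),
    "mid":   rng.integers(80, 170, (64, 64)),
    "light": rng.integers(170, 256, (64, 64)),
}

features = {name: page_features(p) for name, p in pages.items()}

# Order pages from darkest to lightest mean value -- the kind of axis
# used when laying out pages of different graphic novels for comparison.
ordering = sorted(features, key=lambda name: features[name][0])
print(ordering)  # → ['dark', 'mid', 'light']
```

In a real project the sort key might be replaced by a two-dimensional layout (mean brightness on one axis, contrast on the other), with each page rendered at its coordinates, which is what “describing images with images” looks like in practice.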
Although the topic is much larger than these brief explanations I’ve attempted to provide, one of its most fascinating aspects, in my opinion, was the question of whether data visualization is just a pretty picture or a useful tool. In the humanities, I believe it is both, and that is how we must view it moving forward. It is a tool that provides a visual element and can be appreciated like a piece of art in its presentation; at the same time, it should not be so abstract that no data can be extracted from it. The point of data visualization is to further understanding and to reveal the patterns or differences within a data set: to expand one’s view and provide different angles for analysis.