Public Lab Research note


A not-entirely-negative reflection on sensor journalism (Emerson Data Viz)

by ElizabethGillis | October 06, 2014 16:06 | #11239

by Elizabeth Gillis (Emerson Data Visualization Group 1)

This semester I attempted sensor journalism for the first time. The process was a lot like reporting. I had to visit a site and collect data, observations, and pictures. Then the data and observations were analyzed and compiled in a way that could communicate something. The problem was that I’m not sure what exactly the data told me.

I think this problem can first be explained by the fact that we didn’t really have a question that needed answering. At least, we didn’t have a journalistic question. Emerson’s Data Visualization class spread out in groups to collect water samples from all around Boston. We knew when the samples were being collected that we were going to test water conductivity, and that, because high conductivity means something is dissolved in the water, we could see which bodies of water had dissolved solids in them.

There are a few problems with the way we approached this project. First of all, conductivity by itself does not tell a consumer of news much about their water. A successful sensor journalism project needs a clear reason and goal for collecting the data. What do we want to know that conductivity can tell us? The “what do we want to know” part should come before the decision to measure conductivity. Maybe our question would be better answered by testing for phosphates or salinity.

The most commonly cited example of sensor journalism is when the Associated Press gave journalists at the Beijing Olympics air quality sensors. They were looking to answer a clear question: is the air quality bad in urban China? The answer to this question could easily be turned into a story. Either the air quality is actually better than we thought in this highly polluted part of the world, or, as the headline would read, the air quality is below minimum standards. The experiment could be turned into a story, but that’s not to say this project was perfect.

First of all, cheap, portable air quality sensors, like those used in Beijing, have come under scrutiny recently. The Courier-Journal talked to researchers at two universities using these sensors.

They explain one of the many positives of collecting data this way: the sensors can be used by people all over the city, allowing researchers to collect more data from more places. They also own the data and can update it instantaneously for the public to analyze on their own. But both in Louisville and Boston, researchers say the sensors just aren’t as good or reliable as professional data collection.

"All of the readings you have to take with a big grain of salt," said Boston College professor Michael Barnett in the article titled “Personal Air Monitors Less Useful than Hoped.” "Eventually, these are going to get better. It's just a matter of time."

The problem here is in making science user friendly. Scientific experiments are traditionally carried out and verified by professionals for a reason. The process involves understanding the complexities of what data means and how it has to be collected to allow comparison. If you are using different instruments, they must all be calibrated to the same scale; otherwise the data can’t be compared. And the directions for using the instrument and collecting data have to be extremely clear.
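To make that calibration idea concrete, here is a rough sketch of my own (not the class’s actual procedure) of how a Coqui-style sensor’s tone frequency could be mapped onto a shared conductivity scale using reference solutions of known conductivity. Every number, field name, and function below is invented for illustration only.

```python
# Hypothetical calibration sketch: map a Coqui-style sensor's tone frequency
# to conductivity using reference solutions of known conductivity.
# The numbers below are invented for illustration only.

import numpy as np

# Frequencies (Hz) measured in reference solutions, and the solutions'
# known conductivities (microsiemens/cm). Each sensor gets its own table.
reference_frequencies = np.array([250.0, 480.0, 900.0, 1500.0])
reference_conductivities = np.array([50.0, 300.0, 1000.0, 2500.0])

def frequency_to_conductivity(freq_hz):
    """Interpolate a field reading against this sensor's calibration table."""
    lo, hi = reference_frequencies.min(), reference_frequencies.max()
    if not (lo <= freq_hz <= hi):
        raise ValueError(f"{freq_hz} Hz is outside the calibrated range; the reading can't be trusted")
    return float(np.interp(freq_hz, reference_frequencies, reference_conductivities))

# A 700 Hz reading on this sensor maps onto the shared conductivity scale,
# so it can be compared with readings from other, separately calibrated sensors.
print(frequency_to_conductivity(700.0))
```

The point of a table like this is that each instrument gets its own calibration, but every reading ends up in the same units, which is what makes comparison across groups possible.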

The New York Times published a video in June called Microsampling Air Pollution. The story is about smartphone-enabled air sensors that are supposed to allow citizen scientists to take control of their community health monitoring.

In the video, Dr. Iem Heng at Citytech Mechatronics Center holds out the portable air quality sensor in one hand and his iPhone in the other.

“Everyone has a smartphone,” he says. “The one thing that’s different from our field to the lay people out there is, if you design something, here’s the product how do you merge them together?”

In our experiment, we did end up using one sensor for the mapped results; however, for the samples from two groups, that sensor had an extra capacitor, and the frequencies of a couple of the samples were out of range. Not only can our results not be compared to those of someone using a different Coqui, but the sounds from these two groups cannot be compared to the other samples collected by the class.

But I don’t bring up these problems to put down the idea of sensor journalism. One of the tenets of sensor journalism is free and open data for everyone, and that is not a bad idea. But when data is shared with everyone by everyone, it’s even more important that it mean something and not be misleading.

Well-gathered and well-analyzed data is a definitive way to understand how individuals interact with their environment. With sensor journalism, upholding some scientific process is just as important as investing in the collection of data. At the inception of any kind of data collection, a journalist needs to have a clear question they are answering.

Then, they need to identify where their collection is taking place. This is also something we neglected to do in our classroom sensor journalism experiment. What does comparing the conductivity of a freshwater puddle and a saltwater river tell you? It’s easy to see that they will have different conductivities. The river sample will have a higher conductivity, maybe because of salinity, a thriving and complex ecosystem, or because the water is moving. A scientist would tell you that there are too many variables.

Now let’s try a different approach. Let’s say we wanted to compare puddles in one neighborhood depending on how long they last after a rainstorm. The question may be, do puddles that take longer to evaporate after a rainstorm retain more solids over time?

Here we have a specific question, a sample area, and a plan for how the samples are going to be taken. Now, as mentioned before, the collection process and the instruments used have to be standardized. The conductivity testers need to be calibrated to the same scale. The rainfall should be collected at each location, along with puddle size and depth (at each reading). Every sample should be accompanied by observations on what is around and affecting the puddle, and so on. If different people are collecting the data, this process needs to be clear and accessible to them, for instance as a fixed record that every collector fills out the same way (see the sketch below). The risk of not doing this is reporting inaccurate data.
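One way to keep that protocol consistent across volunteers would be a shared record structure that everyone fills in identically. This is purely a sketch of a hypothetical field form for the puddle experiment; the field names, units, and example values are my assumptions, not anything the class used.

```python
# Hypothetical field-record structure for the puddle experiment described above.
# Field names and units are assumptions for illustration, not a standard.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PuddleSample:
    site_id: str                 # which puddle, so repeat visits can be linked
    collected_at: datetime       # when the reading was taken
    hours_since_rain: float      # time since the rainstorm ended
    rainfall_mm: float           # rainfall collected at this location
    puddle_area_m2: float        # approximate surface area at the time of reading
    puddle_depth_cm: float       # depth at the time of reading
    conductivity_us_cm: float    # calibrated reading, in microsiemens/cm
    observations: str = ""       # anything nearby that could affect the puddle

# Example record; all values are invented.
sample = PuddleSample(
    site_id="maple-st-03",
    collected_at=datetime(2014, 10, 4, 14, 30),
    hours_since_rain=6.0,
    rainfall_mm=12.5,
    puddle_area_m2=1.8,
    puddle_depth_cm=2.0,
    conductivity_us_cm=430.0,
    observations="next to road edge; sand and leaf litter washing in",
)
```

Whether the form lives in a spreadsheet or an app matters less than the fact that every collector records the same fields in the same units, so the samples can actually be compared.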

Now there is another part to this hypothetical experiment that should be considered from the outset. How can I, as a journalist, use this information? For every experimental question, there should be a corresponding story-driven question you are trying to answer. For this experiment, it may be: why should the city invest in drainage and fill in the potholes in this neighborhood?

Data collection to reach a conclusion is one way that sensor experiments are useful to journalists. A simpler application is to use sensor journalism to create content that promotes user engagement. If sensors are cheap enough and sold much like publication merchandise, they can be a tool for news organizations to involve the people consuming the news. WNYC did this with their Cicada Tracker in May 2013. The story was already there. Cicadas were going to emerge after 17 years, and a cheap tool existed to track them. The data is obviously limited. It’s constrained to places where you can hear WNYC and was only collected by people who both listen and wanted to take part in the experiment. But the data wasn’t meant to draw any real conclusions. The point of collecting the data is to see how what you’re hearing compares to what your neighbors are hearing. It’s a means of interacting with the story.

In that way, sensor journalism can get more people interested in stories that used to lie flat on the page. Multimedia content values seeing and hearing the data, something that can be provided by these Coqui sensors, for example. If sensor journalism is applied to issues like local pollution and climate change, it could inspire the kind of action journalists wish for with every story. But first, the process.

