**Welcome! This is the home for all things related to evaluation at Public Lab.** Many different feedback efforts are ongoing in different sectors, and we coordinate our efforts to minimize survey fatigue and redundancy. @liz leads the evaluation team! See recent work related to evaluation [here](/tag/evaluation), and ask questions below to find out more.

## What are we measuring towards?

[![LogicModel_headers.jpg](/i/24885)](/i/24885)

**All evaluation is tracked against our [Logic Model](https://docs.google.com/document/d/1N2GnPoe2gaqb5eMWnJOdcdheSmL6KzbkIHUCeTMup4U/edit), and terms in the Logic Model are defined in our [Community Glossary](/glossary).**

The creation of this Logic Model, and of the Snapshot Evaluation and Evaluation Framework based on it, was generously supported by the Rita Allen Foundation (May 2015-May 2018), with additional support from the Listen For Good Project.

## Why we evaluate

The Public Lab community intentionally works together to create a place where collaboration thrives. We collaborate on collaboration. We seek to collectively and publicly understand how we ourselves work together, and the systems, conventions and structures which shape that cooperative practice. To do this better, we need feedback loops that add to our self-awareness.

The feedback we wish we could see includes additional stats about our community's activity, especially where there are gaps: for instance, community questions languishing unanswered, which can be heartbreaking when the topic is environmental health. We would also like to identify emerging topics in real time in order to better tune outreach; this helps us ensure that diversity stays high even as early adopters rush in.

As Chris Kelty famously wrote of his concept "recursive publics," "[we] are the builders and imaginers of this space." This theme stretches across the FLOSS community, and increasing our self-awareness will help us eliminate our collective blind spots. As FLOSS publics strive to broaden in diversity and inclusivity, careful monitoring of where onboarding processes fail is critical. By watching channels and identifying people who connect with the community in one or more ways, we hope to become aware of the ways that people first connect with Public Lab, and what their second, third, etc. steps may be. If there are no subsequent steps, what stopped people who had started to engage from participating further?

****

## How are we measuring?

### Community Surveys

Formerly, a one-size-fits-all [Annual Community Survey](/notes/liz/06-13-2017/your-input-kindly-requested) was delivered over email lists and posted on the website (see 2017_Public_Lab_Community_Survey_.pdf). We have now replaced that low-response format with multiple surveys that reach specific segments of our community who are having shared experiences:

* **People attending live events, in person or remotely**: we use a modified Listen4Good template with Net Promoter Score questions as well as 6 customized questions about our mission and respondent demographics. Delivered via SurveyMonkey. (See the scoring sketch after this list.)
* **Barnraisers**, examples: 2016_Annual Barnraising_Feedback_Survey_.pdf and 2017_Regional_Barnraising_Survey.pdf
* **Software Contributors**, example: [2017 survey](https://docs.google.com/forms/d/e/1FAIpQLSeMFVQ4NNcNRzIAwsWY1bZrrQDIeVh3s399h8dkPzVJ2I-pHA/viewform), delivered via GitHub
* **[Organizers](/organizers)**, example: [2017 Survey as GoogleForm](https://docs.google.com/forms/d/1jrBEmxB6oAnoEixJysqrJaY-xjMLr4vGIwcdMvR3Jnk/edit) and 2017_Organizer_Survey.pdf, delivered via email and personal direct outreach
* **Partnering Organizations** update their activity every year on [publiclab.org/partners](/partners)
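For reference, the Net Promoter Score arithmetic is standard: on the 0-10 "how likely are you to recommend" question, respondents scoring 9-10 count as promoters, 7-8 as passives, and 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. Here is a minimal sketch of that calculation in Python; the sample ratings are hypothetical, not actual survey responses:

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8)
    count toward the total but toward neither group.
    """
    if not ratings:
        raise ValueError("no responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical responses, for illustration only.
sample = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(net_promoter_score(sample))  # 5 promoters, 2 detractors of 10 -> 30.0
```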
****

### Stakeholder interviewing

**A series of stakeholder interviews was done in 2017! You can read them here:** [notes:series:community-interviews]

****

### Online analytics

**Statistics on community activity are publicly displayed at http://publiclab.org/stats.** Experiment with customizing your own queries of publiclab.org activity by adjusting the DD-MM-YYYY dates in the URL, for example → https://publiclab.org/stats/range/31-01-2016/31-12-2016/ !
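If you'd like to script these queries, here is a minimal sketch that builds a range URL from Python date objects and fetches it with the `requests` library; it simply mirrors the DD-MM-YYYY pattern in the example above, and parsing the returned page is left to the reader since its format may change:

```python
from datetime import date

import requests

def stats_range_url(start, end):
    """Build a publiclab.org stats URL for a date range,
    using the DD-MM-YYYY format the /stats/range/ path expects."""
    fmt = "%d-%m-%Y"
    return "https://publiclab.org/stats/range/{}/{}/".format(
        start.strftime(fmt), end.strftime(fmt))

# Example matching the URL shown above (31 Jan to 31 Dec 2016).
url = stats_range_url(date(2016, 1, 31), date(2016, 12, 31))
response = requests.get(url)
print(url, response.status_code)
```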
**Research into pathways through Public Lab's ecosystem is located at https://publiclab.org/first-contact**.

**The ever-growing [Data Dictionary](https://docs.google.com/document/d/167Y-oW7oA4i9zwyr00uygQx7PaIU_uUJ2kVQU3lU6aE/edit#) describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz.**

**Topics include:**

* **Conversational dynamics on mailing lists:**

[![2017-07-12_mailing_list_activity.png](/i/24878)](/i/24878)

* **Rhythms of community activity on publiclab.org:**

[![Screen_Shot_2018-05-10_at_2.59.16_PM.png](/i/24886)](/i/24886)

### User interface design

**See the [User Interface](/ui) page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback!**

### Other interesting views of the Public Lab community over time

* https://publiclab.org/community-development
* https://publiclab.org/stories

****

## Questions

[questions:evaluation]

****

## Related work

[notes:evaluation]

****

### Older page content

From 2014 via @liz: brainstorming [possible community metrics](https://docs.google.com/document/d/1ZnmTco7zaizEelP1awDuWSTZ_tNg9NSpqPKM9XbdO-c/edit)

From 2011 via @warren, interesting! Read on:

On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant relationship. Rather, we seek to formulate an evaluative approach that takes into account:

* multiple audiences - feedback for local communities, for ourselves, for institutions looking to adopt our data, for funding agencies, etc.
* reflexivity - we may work with local partners to formulate an evaluative strategy, and this may often include questionnaires, surveys, and interviews which we take part in both as subjects and as investigators
* outreach - by publishing evaluations in a variety of formats, we may employ diverse tactics to better understand and refine our work; publication in diverse venues (journals, newspapers, white papers, video, public presentations, etc.) offers us an opportunity to reach out to various fields (ecology, law, social science, technology, aid)
* location - our evaluations should be situated in geographic communities, examining the effects of our tools and data production in collaboration with a specific group of residents

##Goals##

Good evaluative approaches could enable us to:

* quantify our data and present it to scientific and government agencies for use in research, legal, and other contexts
* provide rich feedback for field mappers (in the case of [balloon mapping](/tool/balloon-mapping)) and other public scientists to improve their techniques
* assess the effects of our work on local communities and on situations of environmental (and other types of) conflict
* involve local partners in the quantification and interpretation of our joint work
* ...

##Approaches##

We're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them.

##Approach A: Logbook questionnaire##

The logbook is an idea for a Lulu.com printed book to bring on field mapping missions for [balloon mapping](/tool/balloon-mapping). Although this strategy can be reductive compared to interviews, videos, etc., its standardized approach yields data which we can graph, analyze, and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or, for example, to print & carry to the beach to improve mapping technique. Read more at the [Logbook](/wiki/logbook) page.

A mini version of this questionnaire was used by Jen Hudon as part of her [Grassroots Newark](/wiki/grassroots-newark) project and can be found here:

* [Draft questionnaire PDF](/sites/default/files/grassroots-mapping-questionnaire-draft-1.pdf)

##Approach B: Community Blog##

The community blog represents a way for members of our community to ... critical as well as positive...

To contribute to the community blog, visit the [Community Blog page](/wiki/community-blog)

##Approach C: Interviews##

We're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the [interviews page](/wiki/interviews)....