Public Lab is an open community which collaboratively develops accessible, open source, Do-It-Yourself technologies for investigating local environmental health and justice issues.
Revision 56 (current) — liz, July 18, 2019 14:24
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors, and we try to coordinate our efforts to minimize survey fatigue and redundancy. @liz leads the evaluation team! See recent work related to evaluation here, and ask questions below to find out more.

## What are we measuring towards?

All evaluation is tracked against our Logic Model, and terms in the Logic Model are defined in our Community Glossary. The creation of this Logic Model, and of the Snapshot Evaluation and Evaluation Framework based on it, was generously supported by the Rita Allen Foundation (May 2015–May 2018), with additional support from the Listen For Good Project.

## Why we evaluate

The Public Lab community intentionally works together to create a place where collaboration thrives. We collaborate on collaboration. We seek to collectively and publicly understand how we ourselves work together, and the systems, conventions, and structures that shape that cooperative practice. To do this better, we need feedback loops that add to our self-awareness.

The feedback we wish we could see includes additional statistics about our community's activity, especially where there are gaps: for instance, community questions languishing unanswered, which can be heartbreaking when the topic is environmental health. We would also like to identify emerging topics in real time in order to better tune outreach; this helps us ensure that diversity stays high even as early adopters rush in.

As Chris Kelty famously wrote of his concept of "recursive publics," "[we] are the builders and imaginers of this space." This theme stretches across the FLOSS community, and increasing our self-awareness will help us eliminate our collective blind spots. As FLOSS publics strive to broaden in diversity and inclusivity, careful monitoring of where onboarding processes fail is critical.
By watching channels and identifying people who connect with the community in one or more ways, we hope to become aware of the ways that people first connect with Public Lab, and what their second, third, etc. steps may be. If there are no subsequent steps, what stopped people who had started to engage from participating further?

## How are we measuring?

### Community surveys

Formerly, a one-size-fits-all Annual Community Survey was delivered over email lists and posted on the website (see 2017_Public_Lab_Community_Survey_.pdf). We have now replaced that low-response format with multiple surveys that reach specific segments of our community who are having shared experiences.
### Stakeholder interviewing

A series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews]

### Online analytics

Statistics on community activity are publicly displayed at http://publiclab.org/stats. Research into pathways through Public Lab's ecosystem is located at https://publiclab.org/first-contact. The ever-growing Data Dictionary describes the datasets that are available for analysis; it was created by @bsugar and is maintained by @bsugar and @liz. Topics include:

### User interface design

See the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback!

### Other interesting views of the Public Lab community over time
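One of the activity gaps described above, questions languishing unanswered, is the kind of metric these datasets could surface. As a minimal illustrative sketch (the field names, record shape, and 14-day threshold are assumptions for this example, not the actual schema from the Data Dictionary):

```python
# Hypothetical sketch: flag community questions with no answers that have
# sat longer than a threshold. Field names ("title", "posted", "answers")
# are illustrative, not the real dataset schema.
from datetime import date

def unanswered_questions(questions, today, stale_after_days=14):
    """Return questions with zero answers older than the staleness threshold."""
    return [
        q for q in questions
        if q["answers"] == 0
        and (today - q["posted"]).days > stale_after_days
    ]

# Toy records standing in for a real export of question posts.
questions = [
    {"title": "How do I rig a balloon camera?", "posted": date(2019, 6, 1), "answers": 3},
    {"title": "Interpreting my water test strip?", "posted": date(2019, 6, 20), "answers": 0},
    {"title": "Which lens for NDVI?", "posted": date(2019, 7, 16), "answers": 0},
]

stale = unanswered_questions(questions, today=date(2019, 7, 18))
# Only the water-test question is both unanswered and older than 14 days.
```

A report like this, run periodically, would let the community route attention to exactly the questions at risk of being missed.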
## Questions

[questions:evaluation]

## Related work

[notes:evaluation]

## Older page content

From 2014 via @liz: brainstorming possible community metrics.

From 2011 via @warren, interesting! Read on:

On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
### Goals

Good evaluative approaches could enable us to:
### Approaches

We're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them.

#### Approach A: Logbook questionnaire

The logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive compared to interviews, videos, etc., its standardized approach yields data which we can graph, analyze, and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or, for example, to print and carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here:

#### Approach B: Community Blog

The community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page.

#### Approach C: Interviews

We're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page.
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
49 | warren |
October 03, 2018 17:59
| about 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. Why we evaluateThe Public Lab community intentionally works together to create a place where collaboration thrives. We collaborate on collaboration. We seek to collectively and publicly understand how we ourselves work together, and the systems, conventions and structures which shape that cooperative practice. To do this better, we need feedback loops that add to our self-awareness. The feedback we wish we could see includes additional stats about our community's activity, especially where there are gaps, for instance, community questions languishing unanswered, which can be heart-breaking when the topic is environmental health. We would also like to identify emerging topics in real time in order to better tune outreach; this helps us ensure that diversity stays high even as early adopters rush in. As Chris Kelty famously wrote of his concept "recursive publics," "[we] are the builders and imaginers of this space." This theme stretches across the FLOSS community, and increasing our self-awareness will help us eliminate our collective blind spots. As FLOSS publics strive to broaden in diversity and inclusivity, careful monitoring of where onboarding processes fail is critical. 
By watching channels and identifying people who connect with the community in one or more ways, we hope to become aware of the ways that people first connect with Public Lab, and what their second, third, etc steps may be. If there are not subsequent steps, what stopped people who had started to engage from participating further? How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website. 2017_Public_Lab_Community_Survey_.pdf
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsStatistics on community activity are publicly displayed at http://publiclab.org/stats. Research into pathways through Public Lab's ecosystem is located at https://publiclab.org/first-contact. The ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. Topics include: User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Other interesting views of the Public Lab community over time
Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
48 | liz |
June 08, 2018 16:03
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website. 2017_Public_Lab_Community_Survey_.pdf
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsStatistics on community activity are publicly displayed at http://publiclab.org/stats. Research into pathways through Public Lab's ecosystem is located at https://publiclab.org/first-contact. The ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. Topics include: User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Other interesting views of the Public Lab community over time
Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
47 | liz |
May 11, 2018 14:47
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website.
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsStatistics on community activity are publicly displayed at http://publiclab.org/stats. Research into pathways through Public Lab's ecosystem is located at https://publiclab.org/first-contact. The ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. Topics include: User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Other interesting views of the Public Lab community over time
Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
46 | liz |
May 11, 2018 12:52
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website.
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsStatistics on community activity are publicly displayed at http://publiclab.org/stats. The ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. Topics include: User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
45 | liz |
May 10, 2018 21:26
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website.
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsThe ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. Topics include: User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
44 | liz |
May 10, 2018 21:26
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website.
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsThe ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. Topics include: User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
43 | liz |
May 10, 2018 19:56
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board Here's a link to the staff board for developing year-round evaluation tasks, being integrated into the third Trello board above: https://app.asana.com/0/645629000328698/board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website.
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsThe ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. TopicsConversational dynamics on mailing lists:
Rhythms of community activity on publiclab.org: User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
42 | liz |
May 10, 2018 19:50
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors and we try to coordinate our efforts to minimize survey fatigue or redundancy. @liz leads the evaluation team! Ask questions below to find out more. Follow along with current workSee recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018) generously supported by the Rita Allen Foundation Trello Board Trello Board Trello Board Here's a link to the staff board for developing year-round evaluation tasks, being integrated into the third Trello board above: https://app.asana.com/0/645629000328698/board What are we measuring against?All evaluation is tracked against our Logic Model, and terms in Logic Model are defined in our Community Glossary. How are we measuring?Community SurveysOur Annual Community Survey is delivered over email lists and posted on the website.
Stakeholder interviewingA series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews] Online analyticsThe ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz. TopicsConversational dynamics on mailing lists:
User interface designSee the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback! Questions[questions:evaluation] Related work[notes:evaluation] Older page contentFrom 2014 via @liz: brainstorming possible community metrics From 2011 via @warren, interesting! Read on: On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:
GoalsGood evaluative approaches could enable us to:
ApproachesWe're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them. Approach A: Logbook questionnaireThe logbook is an idea for a Lulu.com printed book to bring on field mapping missions for balloon mapping. Although this strategy can be reductive, compared to interviews, videos, etc, its standard approach yields data which we can graph, analyze and publish for public use. The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or for example to print & carry to the beach to improve mapping technique. Read more at the Logbook page. A mini version of this questionnaire was used by Jen Hudon as part of her Grassroots Newark project and can be found here: Approach B: Community BlogThe community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the Community Blog page Approach C: InterviewsWe're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the interviews page. |
Revert | |
41 | liz |
May 10, 2018 19:50
| over 6 years ago
Welcome! This is the home for all things related to evaluation at Public Lab. Many different feedback efforts are ongoing in different sectors, and we try to coordinate our efforts to minimize survey fatigue and redundancy. @liz leads the evaluation team! Ask questions below to find out more.

Follow along with current work

See all recent work related to evaluation here, or click on the boards below to see progress on: 1) our Snapshot Evaluation and Evaluation Framework (May 2015-May 2018), generously supported by the Rita Allen Foundation. Trello Board Trello Board Trello Board

Here's a link to the staff board for developing year-round evaluation tasks, which is being integrated into the third Trello board above: https://app.asana.com/0/645629000328698/board

What are we measuring against?

All evaluation is tracked against our Logic Model, and terms in the Logic Model are defined in our Community Glossary.

How are we measuring?

Community Surveys

Our Annual Community Survey is delivered over email lists and posted on the website.

Stakeholder interviewing

A series of stakeholder interviews was done in 2017! You can read them here: [notes:series:community-interviews]

Online analytics

The ever-growing Data Dictionary describes the datasets that are available for analysis. Created by @bsugar; maintained by @bsugar and @liz.

Topics

Conversational dynamics on mailing lists:

User interface design

See the User Interface page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback!

Questions

[questions:evaluation]

Related work

[notes:evaluation]

Older page content

From 2014 via @liz: brainstorming possible community metrics

From 2011 via @warren, interesting! Read on:

On this page we are in the process of summarizing and formulating our approach towards self-evaluation. As a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not of the typical researcher/participant nature. Rather, we seek to formulate an evaluative approach that takes into account:

Goals

Good evaluative approaches could enable us to:
40 | liz | May 10, 2018 19:49 | over 6 years ago
39 | liz | May 10, 2018 19:49 | over 6 years ago
38 | liz | May 10, 2018 19:48 | over 6 years ago
37 | liz | May 10, 2018 19:46 | over 6 years ago