Early Career Researchers Use Multiple Approaches Collaboratively to Strengthen Resilience in Practice
Written by Madeline Sides and edited by Rafael Lembi. Photos by Madeline Sides.
This blog post is part of a series reflecting on a selection of sessions and keynotes presented at the PECS-III Conference, Montreal, Canada, 12-15 August 2024.
The session “Strengthening the theoretical basis of resilience in practice” was offered by a group of researchers affiliated with the Resilience Alliance as part of the 2024 PECS-III conference in Montreal. The session recapped these researchers’ recent efforts to understand and address the challenges of applying resilience theories in resilience practice. The work presented in this session was produced through a series of workshops, in-person meetings, and virtual collaborations among a group of early career researchers supported by the V. Kann Rasmussen Foundation.
To start the session, Allyson Quinlan, Resilience Alliance Senior Research Fellow, provided a brief history of resilience assessment tools, which helped contextualize the subsequent presentations. Researchers then presented efforts to address a theory gap in resilience practice using methods that included co-creating a Community of Practice, developing artificial intelligence models, qualitatively and quantitatively assessing pre-existing resilience guides, and designing a decision-support tool.
Clara Graucob from Michigan State University shared results of a systematic literature review of existing resilience assessment approaches. Clara specified that tools used in resilience assessments include interviews, focus groups, surveys, participatory spatial tools, and participatory temporal tools. A major gap that Clara’s group identified through the review is that many resilience assessment tools do not adequately describe the resources, such as money and time, required to conduct an assessment. Further work from this group may produce a publication describing the state of the practice of resilience assessment.
Vitor Hirata Sanches from the Australian National University shared the recent work of a metrics-oriented working group that examined metrics describing diversity and agency in resilience assessment practice. This group asked how often, and in what ways, quantitative metrics in resilience assessment incorporate agency and diversity. Initial findings from the group’s systematic literature review, shared by Vitor, were that metrics vary widely by field of practice within the broad umbrella of resilience, that metrics describing diversity are rare, and that metrics describing agency are rarer still.
Rubi Quiñones from Southern Illinois University Edwardsville brought a unique perspective to the discussion by framing resilience assessment as a data management and processing challenge. Rubi explained how artificial intelligence (AI) may be a useful tool in this space for prediction analyses and pattern recognition. Rubi and collaborators conducted a literature review to understand whether and how AI is being used in social-ecological systems (SES) research. Rubi explained the importance of open-source code bases, and not only open-source data sets, in this field of research. Looking forward, Rubi hopes to develop workshops, together with other computer science experts, to help practitioners in SES fields learn how to develop custom AI tools for SES research.
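To give a flavor of the kind of pattern recognition Rubi described, here is a minimal sketch in Python that clusters sites by social-ecological indicators. The data, indicator names, and cluster count are invented for illustration and were not part of the session.

```python
# Hypothetical sketch: clustering sites by social-ecological indicators to
# surface patterns. All data, indicator names, and the cluster count are
# invented for illustration; they do not come from the session.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy matrix: rows are sites, columns are hypothetical SES indicators,
# e.g., land-cover diversity, governance participation, income diversity.
rng = np.random.default_rng(seed=0)
indicators = rng.random((30, 3))

# Standardize so no single indicator dominates the distance calculation.
scaled = StandardScaler().fit_transform(indicators)

# Group sites into three clusters; in real work the number of clusters
# would be chosen with domain knowledge or a model-selection criterion.
labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(scaled)
print(labels)
```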
Morgan MathsonSlee from Michigan State University offered reflections on developing a community of practice (COP) of early career researchers for this Resilience Alliance-affiliated effort. Morgan’s presentation broke the fourth wall of academic knowledge production by offering a unique “behind-the-scenes” reflection on the process that produced the work other researchers in the session shared. In a field that encourages researcher reflexivity, Morgan’s presentation was a valuable contribution that exemplifies how this value appears in practice. Specifically, Morgan shared how the COP was facilitated online and some outputs from this community, including the “Sound Bites from a Community of Practice”.
Jennifer Hobod from the University of Leeds shared efforts to weave the previously mentioned working groups together by asking how practitioners should choose among resilience assessment tools. Jennifer drew on the work of the other groups, including the quantitative metrics, the clusters that emerged from assessing existing assessments, and the two types of AI identified earlier in the session. Jennifer shared an in-progress typology of assessment approaches and commented that this work may eventually become a decision-support tool.
This session offered a compelling conversation among several distinct but related researcher voices. I particularly enjoyed seeing process photos and the bigger context of how this research network was collaborating. I appreciate the efforts of these researchers to address a community-wide need: making resilience assessment theories and tools more interpretable and usable to improve the quality of our collective work.