Networked Learning Loops Through Benchmarking | Beth’s Blog

Networked Learning Loops Through Benchmarking

Measurement, Training Design, Transparency

In The Networked Nonprofit chapter on “Learning Loops,” we illustrated how networked nonprofits use a lighter, real-time assessment process as they engage their community, making improvements and adjustments along the way. Some describe this as “try it and fix it.” It might seem like changing a flat tire while the car is still moving, but for many networked nonprofits it is a secret to their success.

I remember thinking to myself at the time: if one networked nonprofit can do this, couldn’t a network of networked nonprofits use real-time collaborative benchmarking and data sharing for learning? No one knew what I was talking about. I’m not sure I did either, but fast forward two years and it is here.

My colleague Devon Smith, a self-described data nerd who loves benchmarking, pointed out this glorious example from the museum world, from Sean Redmond, a web developer at the Guggenheim.

His complete benchmarking list of several hundred museums compares Twitter followers, Facebook fans, and Klout scores. There is also a top-50 performance list. The project was inspired by Jim Richardson’s awesome spreadsheet but “tricked out with automated data collection social media APIs.” Sean also blogs about best practices and standards in social media for museums based on the weekly data.

The original data set was collected in a Google spreadsheet using a manual, time-consuming data collection process. To avoid data errors and ease the data collection chore, Sean created an automated version, with data collected via the Twitter and Klout APIs. As Sean points out, Clay Shirky called this a collaboration problem, the kind of problem that the internet is better at solving than classical institutions are.
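
To make the shift from manual to automated collection concrete, here is a minimal sketch of the kind of pipeline involved. This is not Sean’s code: the handles, numbers, and field names are invented, and the actual API fetching is assumed to have already happened (the dictionaries stand in for API responses).

```python
import csv
import io

def merge_metrics(museums, twitter_counts, klout_scores):
    """Join per-museum follower counts and Klout scores into rows,
    defaulting to 0 when a service returned nothing for a museum."""
    rows = []
    for handle in museums:
        rows.append({
            "museum": handle,
            "twitter_followers": twitter_counts.get(handle, 0),
            "klout_score": klout_scores.get(handle, 0),
        })
    # Rank by followers so the same sheet doubles as a top-N list.
    rows.sort(key=lambda r: r["twitter_followers"], reverse=True)
    return rows

def to_csv(rows):
    """Render the benchmark rows as CSV, like a shared spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["museum", "twitter_followers", "klout_score"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Illustrative numbers only, standing in for live API responses.
rows = merge_metrics(
    ["guggenheim", "moma", "tate"],
    {"guggenheim": 180000, "moma": 950000, "tate": 620000},
    {"guggenheim": 71, "moma": 80, "tate": 77},
)
print(to_csv(rows).splitlines()[1])  # → moma,950000,80
```

The point of automating even this trivial join is that it runs weekly without anyone re-typing numbers, which is exactly what turns a one-off spreadsheet into an ongoing benchmark.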

With the data collection drudgery out of the way, benchmarking is transformed into a vehicle for networked learning in real time, or learning in public. I didn’t understand it until I read the conversation on Twitter.

Imagine a community of practice of nonprofit organizations working together to learn and improve their tactical execution on social media channels. Would something like this inspire group motivation? Certainly it would create some subtle peer pressure to do your homework.

I shared a link to the site on my Facebook Page with a question about its usefulness (perhaps minus one of the bogus metrics). My long-time colleague Jeff Gates, an artist and museum professional, said:

I wonder about this emphasis on number of followers, change in percentage of followers of this list rather than focusing on strategies for engagement. Number of followers has become a benchmark stat like the number of hits on our web pages used to be. And it’s not very useful.

If this leads to a discussion about strategies for success (and what success is) that’s fine. But, by itself, it’s not of great value to me.

I asked Sean on Twitter why they didn’t collect metrics like membership numbers or visitors to the physical building. Jokingly, he said, “There isn’t an API for that,” but followed up with “Picking low-hanging fruit lets better fruit fall.” Does tracking metrics like number of followers, as part of a grid of well-selected metrics and without creating meaningless cause-and-effect assumptions, help us improve practice and document results? Here’s a sneak peek at a benchmarking spreadsheet that is doing this for an integrated campaign.

Documenting value and finding meaning are important commodities in new systems of engagement. But in the real life of many institutions, there is a propensity for senior managers to latch onto stats like number of followers on Facebook. Do you collect metrics to impress your boss, or ones that truly help you improve?

What would be a valid benchmarking list for philanthropy? What if the Glass Pockets data benchmarked a transparency metric?

Are you benchmarking your organization’s social media or integrated communications campaigns against peer organizations? Have you conducted a benchmarking study of best practices? Let me know in the comments!

 

Update:  Aquariums and Zoos Benchmark
https://plus.google.com/113352607123727266547/posts/gSwE8KNwatE

11 Responses

  1. Devon Smith says:

    I think of collaborative data sharing/peer benchmarking in much the same way as institutional data collection & analysis: you’re looking for outliers, bright spots or dark spots that help point you in the direction of where you should be spending your time digging deeper into the data. Data collection has to come before data analysis; this list is a great example of *collection.*

    Just as Sean showed, a big jump in fans or followers can be a good opportunity to start a conversation between peer institutions about what techniques they’re using.

    There are always going to be data points that we want to collect but can’t…yet. Sean’s website is a great model of scaffolding (something he references in the ‘better fruit falling’ tweet). It has the potential to put the social media teams at these institutions in touch with each other, and gives them a foundation to build on top of, when future opportunities for data sharing come along. Inevitably there will be new & better APIs that collect more of the information we want (and then of course we’ll want new and different data). You have to start somewhere, and this automated collection of *data that is easy to collect* is a huge leap forward over a Google spreadsheet.
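
The outlier hunt described in this comment can be sketched very simply: score each institution’s week-over-week growth against the peer group and flag anything unusually far from the mean. The museum names, numbers, and the 1.5-standard-deviation threshold below are all invented for illustration (small peer groups rarely produce extreme z-scores, so the cutoff is deliberately loose).

```python
from statistics import mean, stdev

def bright_spots(weekly_growth, threshold=1.5):
    """Flag institutions whose week-over-week follower growth (%) is an
    outlier versus the peer group, returning their z-scores."""
    values = list(weekly_growth.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return {}  # everyone grew identically; nothing stands out
    return {
        name: round((growth - mu) / sigma, 2)
        for name, growth in weekly_growth.items()
        if abs(growth - mu) / sigma >= threshold
    }

# Illustrative numbers only: one museum's spike stands out from the pack.
growth = {"museum_a": 1.2, "museum_b": 0.9, "museum_c": 1.1,
          "museum_d": 14.5, "museum_e": 1.0}
print(bright_spots(growth))  # → {'museum_d': 1.79}
```

A flagged institution isn’t a conclusion, it’s a conversation starter: the point of the benchmark is to go ask what museum_d did that week.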

  2. Interesting debate…I agree with Devon Smith’s comment!

  3. Joitske says:

    I met someone here working on benchmarking for learning and called it ‘benchlearning’ – great term!

  4. Web-based ‘visitor’ data (as I interpreted from the original tweet, though Beth mentions ‘visitors to the physical building’) is generally accessible via an API. However, it is considered *private* information and is only available to the account holder.

    Now, that gave me the thought that a resource vendors (e.g., Twitter or Facebook) could provide would be to categorize our various non-profit accounts and publish that data without the identifying information: just a category summary with the number of members included.

    I believe some of this data is available in some very expensive reports from places like Forrester Research.

    Pew Internet has an interesting report on Social Networking Sites (http://www.pewinternet.org/Reports/2011/Technology-and-social-networks.aspx). Not exactly what we were looking for here, but a start.

  5. Beth says:

    Daniel: Can you share more specifics re: the web-based visitors API?

    Wish that insights data from FB was available via API …

  6. Beth: Did you mean ‘Dan’? :-)

    The Facebook Insight data is available via their API (http://developers.facebook.com/docs/reference/api/#analytics). That link’s page describes how it applies to ‘Apps’, but it also works for Pages and Domains.

    I’m in the process of writing an in-house dashboard which will include retrieval of this data to make on-going tracking & analysis easier.
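
A minimal sketch of the kind of retrieval such a dashboard might do, under stated assumptions: the page id and token are placeholders, the metric name is illustrative, and the parsing assumes the insights response shape of the time (a `data` array of metrics, each with a `values` series). It is not the actual in-house code.

```python
import json
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com"

def insights_url(object_id, access_token, metric=None):
    """Build a Graph API insights URL for a Page, App, or Domain."""
    path = f"{GRAPH}/{object_id}/insights"
    if metric:
        path += f"/{metric}"
    return path + "?" + urlencode({"access_token": access_token})

def latest_value(payload):
    """Pull the most recent data point out of an insights response."""
    series = json.loads(payload)["data"][0]["values"]
    return series[-1]["value"]

# Placeholder id/token; the JSON stands in for a live API response.
url = insights_url("123456", "TOKEN", metric="page_fan_adds")
sample = ('{"data": [{"name": "page_fan_adds",'
          ' "values": [{"value": 12}, {"value": 30}]}]}')
print(latest_value(sample))  # → 30
```

Fetching the URL on a schedule and appending `latest_value` to a local store is all a basic tracking dashboard needs.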

  7. Beth says:

    Wow, C. Daniel Chase.

    I need to interview you – I’m co-authoring a book about social media measurement with KD Paine and would love to interview you as part of the book. How can I get in touch with you? My email is here:
    http://www.bethkanter.org/contact/

  8. Beth says:

    Closing the loop: C. Daniel Chase wrote this wonderful guest post on dashboards
    http://www.bethkanter.org/dashboard-tips/

  9. Beth says:

    On Twitter, Tony Brown let me know about the Zoo/Aquarium spreadsheet for benchmarks: http://twitter.com/#!/anthonybrown/status/100565409102692352

    Leading me to his google + post:
    https://plus.google.com/113352607123727266547/posts/gSwE8KNwatE

    I have a few questions that I hope Anthony will answer in the comments or perhaps consider a guest post:

    1.) Are you gathering data manually or is it leveraging the API?

    2.) How are you using the data?

    3.) Is there conversation within the group on Twitter or elsewhere about how to use the data to improve?

  10. Answers:

    1) Yes, manual data gathering (twitter.com, Klout plug-in, klout.com, Twitter iPhone app, 2011 AZA Member Directory).

    2) At this point, my goal is to share with other zoo/aquarium people (and social media pros) to better the understanding of where we are as an industry on Twitter. This helps create a reference point for new/improved Twitter usage across the industry (i.e.: how often are we tweeting? Is this in line with our peers? Who are the most active/followed/etc.?). From there, hopefully we can begin the dialog of “best practices”.

    3) Conversation has been low at this point – but the sheet was just “shared” publicly last night.

  11. Beth says:

    Anthony, I’m going to do a quick post.
