Driving continuous improvement: lessons from funding social tech

There is one question that dominates our work at Shift and has done for the last 8 years: how do our products and services get better?

Depending on what point of development our product teams are at, they’re always asking a version of that question.

  • How do we take a new idea through to a proven solution?
  • How do we go from proving impact in one context to delivering impact consistently and at scale?
  • How does an established service leverage everything it has learnt over five years to radically improve its impact?

However, despite these strategic priorities, the plans, budgets and activities of our teams only partly reflect these objectives. Most of the time, longer-term development objectives are heavily diluted by short-term priorities and deliverables – and sometimes totally swamped by them. This is particularly problematic for our social technology projects which normally require multiple rounds of design and development before they start to unlock their potential for scalable, cost efficient impact.

And we weren’t alone in thinking this.

Earlier this year, as part of our social tech innovation partnership with Nominet Trust, we brought together six further partners – Big Lottery Fund, Comic Relief, Esmée Fairbairn Foundation, Paul Hamlyn Foundation, Guy’s and St Thomas’ Charity and CAST – to explore how grant funding could better support the development of social technology.

After consulting with over 100 social organisations and funders, Shift has developed a new model that aims to drive products and services towards reaching their full potential. This model takes best practice from within and beyond the social sector and proposes a series of common phases and milestones of development and investment, as well as indicators and methods that help track and deliver improvement.

The model has been designed around the needs of social technology development but, as the project progressed, it became clear that the issues it aims to address are common to a much wider range of products, services and programmes.

Launch event

If you want to find out more, you are welcome to join us at our upcoming event.

Place: Central London location, TBA
Date: Thursday 2nd November, 4.30 – 6.30pm
RSVP: Via Eventbrite

Learning from our experience

One of our ventures, Historypin, which aims to build local social capital through collaborative storytelling, established a pretty good beta product in 2012 through innovation partnerships with Google and the Nominet Trust. The app quickly reached 500,000 downloads, the community was growing and evaluation of the impact in local areas was positive.

However, at that point, Historypin was a long distance from being a great product. What it needed was the resources to focus on continuous improvement, to work through multiple cycles of design over several years to deepen impact, respond to user needs, develop sustainable revenue streams and to build a team capable of delivering at scale. What it managed to accumulate, however, was 60 different deliverables from 40 partnerships over 3 years, 90% of which were focused on short-term outcomes. As a product, Historypin spun its wheels. About 10% of resources were going into improvement and 90% into managing and delivering partnerships that did not share (in anything but spirit) those improvement goals.

Since 2016, we have been able to unlock Historypin from this cycle and, through some invaluable strategic grant funding from BLF, the team have used all of that experience to design a significantly improved product.

However, for Historypin and for all of our other teams, the challenge of finding the right resources and partnerships to move forward towards long-term development objectives remains paramount and permanent. It has also become clear, as an active member of a wide network of social organisations, that Historypin’s wilderness years are very, very common across the sector, particularly for social technology projects.

Questions and focus

The research and consultation project set out to explore how grant funding partnerships could be better designed to support more social technology projects through to delivering sustainable impact at scale.

Why grant funding?

Grant funding was a priority because it clearly plays a crucial role in supporting the improvement of impact-focused services, products and programmes: it cares deeply and exclusively about impact; it supports every stage of the journey from a new concept to an established solution; it can embrace higher risks than other capital; and it is mostly managed by institutions that have been and will be around for a long time.

Why social technology?

Social technology is a focus because questions related to innovation and improvement are particularly pertinent to tech for good projects. Firstly, technology projects tend to spend longer in no/low impact development phases before, potentially, growing to deliver more cost effective impact at scale. This requires the kind of funding and support that values improvement and progress towards long-term objectives. Secondly, there is an obvious and essential role for iterative, test-and-learn design cycles within technology development. These requirements combine to make the current funding landscape especially problematic for social technology.

Defining improvement

We haven’t ignored the challenge of clearly defining what we mean by terms like continuous improvement, innovation, development and, even more obliquely, phrases like “progress towards long-term goals” and “getting better”. In fact, the lack of shared language is at the heart of the rationale for the project, as it reflects the complexities of identifying, planning, delivering and measuring improvement.

So what are we referring to?

We’re referring to the process of increasing the potential of a service, product or programme to deliver greater impact.

During early innovation stages, this would consist of very rapid increases in potential (from none to some) and, when a solution is proven and established, of regular or periodic optimisation of potential (from some to more).

Our experience on Historypin is a useful way of bringing this to life.

Between 2012 and 2015, the product delivered some impact in every one of those 60 projects, it generated millions in revenue, it built a community of users and its team grew in numbers significantly. But (and it’s a crucial but) very little was done to increase its potential to deliver greater impact, other than growth. That product was fairly good in 2012 and still fairly good in 2015. In all of the various dimensions of fundamental progress, it remained fairly static:

  • The core technology and user experience remained pretty much the same (the solution)
  • The people managing Historypin increased but not in capacity to deliver a brilliant solution at scale (the team)
  • The scale of the activity and therefore overall impact grew, but the depth of the impact didn’t increase across projects (its social value)
  • The degree of user satisfaction and retention remained average (its user value)
  • The business model continued to be pretty basic and the potential to scale revenue streams didn’t significantly increase (its financial value)

As we lay out some of our findings from this collaborative project – and certainly when we launch our recommendations in early November – we’ll return often to this question of definitions. Just as it is clear that balancing improvement and delivery represents a challenge, so does the means of identifying and defining these two distinct but intertwined concepts.

Initial insights

So far, we have explored and analysed dozens of existing models and practices and interviewed over 100 charities, social enterprises, Trusts and Foundations, philanthropists, impact investors, commercial companies, accelerators, incubators and other experts. Between now and early November, we will speak to many more, as well as bring our research together into a concise set of findings and recommendations.

In the meantime, here are some of the significant insights that come out of this work so far:

1. The focus on grant funding is right

We see grant funders as representing the biggest opportunity for investment in improvement due to their focus on impact, the experience and stability of the institutions in the sector, and the breadth of where and what they will fund. This hypothesis was sense-checked throughout the research and we feel confident the emphasis is right.

This doesn’t mean, however, that we want to ignore the ecosystem within which grant funders operate. There are a number of alternative means of funding which also have a role to play in driving improvement, if arguably to a lesser extent than grant funding.

Impact investment

Of all of the available capital for the development and delivery of impact-focused products, services and programmes, impact investment is clearly the most well aligned with improvement. Equity and debt are, by their nature, future-focused. They look for opportunities to significantly increase the potential of products and services to deliver greater social and financial value.

So, why isn’t the focus of the project on impact investment?

Firstly, the existence of fairly large pools of impact investment in the UK, the process of securing it and its effect as it is deployed all combine to encourage, demand and support improvement. So, in some ways, we don’t need to ask questions of impact investment when it comes to driving improvement.

Secondly, while impact investment plays this role where it invests, it is only active a) after products and services have (for the most part) proven their potential to deliver impact and generate revenue, and b) within certain areas of the sector, for certain types of solution and business model. While the latter is inevitable, the former is an important area for investigation – how can impact investment exert its positive influence further upstream?

There seem to be two questions to explore here:

  1. How can impact investment embrace the risks of earlier investment? There is currently very little evidence that impact investment can generate the kinds of returns from future exits that would justify riskier and earlier investment. However, a clearer picture of the scope of impact investment will emerge over the next few years, as the sector demonstrates the potential (or not) of significant returns and as pioneers like ClearlySo, MustardSeed and, in particular, Zinc invest further upstream.
  2. How can impact investment help encourage the rigorous drive for improvement further upstream? In this area, this research may be able to make a modest contribution, adding to the significant progress of organisations such as the Access Foundation, because part of the answer lies in how more grant funding can be leveraged to drive improvement.

Commissioning

Commissioning by local and national government appears to represent significantly less potential. It is, by its nature, very focused on short-term delivery and immediate impact (or, in the case of mechanisms like impact bonds, short-term delivery and longer-term impact). Therefore, its ability to invest in and value the increase in potential is limited. After a decade of cuts, concepts such as payment-by-results and single-year cashable savings seem far more dominant than anything resembling continuous improvement of services.

However, this superficial view does an injustice to commissioners, who clearly support improvement in several ways, including:

  • Supporting innovation: either on their own or in partnership with grant funders, local and national commissioning bodies invest regularly and effectively in new approaches
  • Embracing service design: as the growing portfolio of clients at organisations like FutureGov and the Behavioural Insights Team demonstrates, there is an appetite within government services for improvement beyond just financial efficiencies.
  • Long-term partnerships: commissioners have, in many ways, the most to gain from investing in the improvement of services alongside their delivery, as their partners deliver better and better value, and some commissioners design long-term relationships with this agenda in mind.

Again, commissioners, like impact investors, have a crucial role to play, directly in the way they commission and indirectly in how they collaborate with grant funders and impact investors. We continue to feel that, where there is overlap between commissioning and this work, it is in the relationship with grant funding and this remains the focus of the project.

2. The issues aren’t exclusively relevant to social technology

Social technology is a great case study on the role and requirements of continuous improvement. However, questions about the relationship between grant funding and the needs of improvement and innovation very quickly lead to issues beyond social technology.

Ultimately, every social organisation we interviewed and every product, service or programme that we discussed had at least partly unfulfilled ambitions for improvement. Organisations spending north of £100 million on dozens of no/low tech services felt, just as keenly as social tech start-ups, that the progress of their projects towards their desired destination was slowed, strangled or just plain abandoned because they couldn’t get the resources or build the capacity they needed.

Equally, every Trust and Foundation, whether they funded social technology projects or not, could point to investment in learning and improvement and strategic ambitions to help products, services and programmes make progress toward long-term development objectives. They also all described the constraints on capacity – both the capacity at their institutions for non-financial support and the capacity at social organisations for continuous improvement.

The questions contained in this research clearly aren’t relevant to all funded projects (e.g. those defined by objectives to exist for a short period of time and, for the most part, capital build projects). They are, however, relevant beyond social technology and, as a result, we have both explored the challenges specific to social tech development and expanded the breadth of organisations we have consulted and the scope of the recommendations.

3. Helping to find a common language, standards and models for funding partnerships

It has been our experience over the last 15 years that, despite the experience, intelligence and ambition for it to be otherwise within funders and social organisations, a part of every grant funding relationship tends to resemble (in the words of one astute grants manager we interviewed) a slightly silly dance.

The worst version of this silly dance tends to go as follows…

First, the dance is planned.

Trusts and Foundations develop criteria for a programme that focus on a combination of theme (e.g. youth employment), type (e.g. technology), stage of development (e.g. innovation), outputs (e.g. a pilot) and outcomes (e.g. proven impact on a certain number of beneficiaries).

The invite goes out.

Social organisations that want funding to carry on doing what they’re already doing sit round tables and think about how their existing work can be framed as innovative social tech that will increase youth employment, and how they can deliver a pilot with 100 participants for £20,000.

Commence the dance.

The first stage of humiliation for charities involves shouting “we’re new, we’re new!” (or “we’ve nailed it, we’ve nailed it!”, depending on the criteria) the loudest in an awful dance-off.

And things only get worse, as the gap between “What We Promised” and “What We’re Delivering” opens up into a chasm that only a frenzied Smoke and Mirrors dance, that gradually engulfs the whole organisation, can close.

Most versions of the dance are a bit less extreme, but equally unproductive.

The Hide and Seek – social organisations present funders with the information that they think they are looking for, while funders desperately try to find out what is actually going on and assess risk based on what they can read between the lines.

The Beneficiaries Multiplier – grants officers come back from meetings with funding committees with “doubts about value for money” and the social organisation instantly agrees to triple the number of beneficiaries.

The Sustainability Nonsense – funders ask social organisations for 250 words on how the service will be sustained after funding, to which both the funder and the social organisation know the answer is “come back to you and / or another grant funder for more money”, but all pretend that “transition to sustainable revenue via a made-up business model” should go in the box instead.

All of this points to one thing: the need for more common terms, standards and models.

In the case of improvement, we have found that there is a strong consensus, amongst funders and social organisations, that funded products, services and programmes can and should improve at a faster rate than they currently do. But in the way of this are several significant barriers that will take time, discussion and trial and error to overcome. We feel that the most useful contribution that we can make is to the terms, standards and models that sit between funders and social organisations, as well as between funders and other funders, investors and commissioners.

4. A means of measuring improvement

As we’ve suggested, the challenge does not appear to be in finding agreement on the problem. We can’t find anyone who likes the idea of services remaining in the same state for years and years, delivering a fraction of the impact that they could. Nor anyone who thinks the wilderness between early stage social tech innovation and proven, sustainable impact is a good way of working. Nor are there many people who deny these issues exist (although the degree varies, of course).

It is also clear that there isn’t one way of making progress in this area, but many. Here are just some of the existing practices and new ideas that have surfaced during the project so far:

  • Social organisations can clearly articulate the stage of development of their products, services and programmes, as well as their plans for making progress
  • Social organisations can access the skills, resources and time that they need to plan and deliver improvement, particularly in areas where they are traditionally less well equipped, such as service design and business planning
  • The definition of what is regarded as “innovative” can broaden to value enhancements to existing services as highly as entirely new services (if not more so)
  • Evaluation to prove impact can be complemented by (or replaced with) evaluation to drive improvement
  • Funders can encourage and invest in improvement objectives for every project they fund, whether those improvements are defining (e.g. from nothing to something) or complementary (e.g. optimisation of delivery)
  • Funders can gradually take their funding committees and trustees on a journey towards valuing improvement as highly as direct, short-term impact
  • Funders and social organisations can work together to set delivery objectives that match the stage of development of the project
  • Funders can take a more deliberate and accurate approach to funding at particular stages of development and use this to design more appropriate financial and non-financial support
  • Funders can collaborate more with other funders, as well as commissioners and impact investors, to build a landscape of investment that increases the chances of ongoing improvement

It has become clear to us that a component of pretty much all of these potential solutions is measurement.

Ultimately, we have to be able to make improvement as tangible and measurable an outcome as short-term impact on beneficiaries. We have to be able to diagnose current levels of development, plan progress, track improvement while it is happening and track the impact of improvement after it has happened. We have to be able to move improvement from a fuzzy, background concept to a concrete priority.

None of this is straightforward, but there are heaps of precedent and best practice to draw on. The Dartington Social Research Unit and Nesta’s work on Standards of Evidence are used widely by funders and social organisations. RIBA’s Plan of Work stages help those delivering and investing in capital build projects diagnose, plan and deliver progress towards completion. Tech start-up models like the Path Forward are regularly applied to social tech development. And many Trusts and Foundations use their own models and methods to identify the current stage of development of a project and assess proposals in this light, as well as design funder-plus style support to ensure improvements. Even more widely, funding organisational capacity is a very well established and proven model. While this is different (often markedly so) from the improvement of services, products and programmes, it reflects a shared commitment to increasing potential.

We look forward to continuing to share what we have found and our suggestions, based on this research, for how grant funding can best drive improvement, helping products and services get better with every grant.