This is where you take the information you collected in step two and apply it to the objectives you set in step one, and it is where a good software solution can really come in handy. Data Evaluation: Lastly, be sure to evaluate your outcomes management systems regularly (we recommend annually) to make sure they are still serving the needs of both the organization and any funders of the particular program or project. When evaluating systems, ensure they will scale as your services and needs grow.
How to Measure Nonprofit Outcomes of a Program with Software
Having the right software tools in place can significantly ease the process of outcomes reporting. Capabilities include:
- Tracking and analysis of program participants' demographic data
- Referral management
- Secure storage of participant needs, progress assessments and history information
- Attendance monitoring
- Identification and tracking of key trends
- Monitoring and assessment of program and staff effectiveness
- Determining which staff, services, programs and efforts are most effective at achieving desired outcomes
- Reporting capabilities that let you meet multi-funder reporting obligations in minutes
Conclusion
Learning how to measure the outcomes of a program is an invaluable skill for any social service administrator.
Health systems target outcome measures based on state and federal government mandates, accreditation requirements, and financial incentives. Although healthcare outcomes and targets are defined at the national level, health systems might set more aggressive targets.
Reporting and accreditation entities have processes in place to normalize outcomes data to account for context, which is key when it comes to reporting. Using fall rates as an example, if a small hospital sees 10 patients in one month and one patient falls, its fall rate is a high 10 percent. CMS uses outcome measures to calculate overall hospital quality.
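As a rough illustration of the arithmetic above, the sketch below computes a raw fall rate and a volume-normalized rate. The figures and the per-1,000-patient-days normalization are only examples, not a prescribed reporting standard.

```python
def fall_rate_percent(falls: int, patients: int) -> float:
    """Raw fall rate as a percentage of patients seen."""
    return 100.0 * falls / patients

def falls_per_1000_patient_days(falls: int, patient_days: int) -> float:
    """Fall rate normalized per 1,000 patient-days, which makes
    small and large facilities easier to compare."""
    return 1000.0 * falls / patient_days

# The example from the text: 1 fall among 10 patients is a 10 percent rate.
print(fall_rate_percent(1, 10))            # 10.0
# The same single fall looks different once patient volume is accounted for.
print(falls_per_1000_patient_days(1, 40))  # 25.0 falls per 1,000 patient-days
```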
In a report, CMS explained how it arrived at its hospital star ratings: it grouped outcome measures into seven categories weighted by importance. There are hundreds of outcome measures, ranging from changes in blood pressure in patients with hypertension to patient-reported outcome measures (PROMs).
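To make the idea of importance-weighted categories concrete, here is a minimal sketch of a weighted composite score. The category names follow the groupings discussed below, but the scores and weights are illustrative assumptions; the actual weights and statistical model are set out in CMS's published star-rating methodology.

```python
# Illustrative only: hypothetical category scores (0-1) and weights.
# Real CMS star ratings use published weights and a more complex model.
category_scores = {
    "mortality": 0.82,
    "safety_of_care": 0.75,
    "readmission": 0.68,
    "patient_experience": 0.71,
    "effectiveness_of_care": 0.90,
    "timeliness_of_care": 0.88,
    "efficient_use_of_imaging": 0.93,
}
weights = {
    "mortality": 0.22,
    "safety_of_care": 0.22,
    "readmission": 0.22,
    "patient_experience": 0.22,
    "effectiveness_of_care": 0.04,
    "timeliness_of_care": 0.04,
    "efficient_use_of_imaging": 0.04,
}

# Weighted average of the category scores.
composite = sum(weights[c] * category_scores[c] for c in category_scores)
print(f"composite score: {composite:.3f}")
```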
The seven groupings of outcome measures CMS uses to calculate hospital quality are some of the most common in healthcare. Mortality is an essential population health outcome measure. Safety of care outcome measures pertain to medical mistakes; skin breakdown and hospital-acquired infections (HAIs) are common examples.
Readmission following hospitalization is a common outcome measure. Readmission is costly and often preventable. UTMB reduced its hospital readmission rate by implementing several care coordination programs and by leveraging its analytics platform and advanced analytics applications to improve the accuracy and timeliness of the data used for decision making and performance monitoring. Patient-reported outcome measures (PROMs) fall within the patient experience outcome measure category.
This information can provide a more realistic gauge of patient satisfaction, as well as real-time information for local service improvement and a more rapid response to identified issues. Patient experience may also be used as a balance metric for improvement work. For example, a care delivery process may decrease length of stay (LOS), which can be a positive outcome, but result in a decreased patient satisfaction score if patients instead feel they are being pushed out.
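A simple way to operationalise a balance metric is to track it alongside the primary improvement measure and flag when one improves at the other's expense. The numbers below are invented for illustration.

```python
# Hedged sketch: track an improvement metric (length of stay) together with
# a balance metric (patient satisfaction) before and after a process change.
baseline = {"avg_length_of_stay_days": 4.8, "satisfaction_score": 86.0}
current = {"avg_length_of_stay_days": 4.1, "satisfaction_score": 79.5}

los_change = current["avg_length_of_stay_days"] - baseline["avg_length_of_stay_days"]
sat_change = current["satisfaction_score"] - baseline["satisfaction_score"]

print(f"length of stay change: {los_change:+.1f} days")
print(f"satisfaction change:   {sat_change:+.1f} points")

# Flag the case the text warns about: LOS improved but satisfaction dropped,
# suggesting patients may feel they are being pushed out.
if los_change < 0 and sat_change < 0:
    print("Warning: improvement may be coming at the cost of patient experience.")
```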
Given the rapid changes that occur within healthcare, making sure best-practice care guidelines are current is critical for achieving the best care outcomes. Failing to adhere to evidence-based care guidelines can have negative consequences for patients. Timeliness of care outcome measures assess patient access to care. When setting outcome targets, the outcome funder needs to consider which approach, individual-level targets or targets measured against a comparison group, is most appropriate for the characteristics of the cohort, as highlighted below.
Works best with a homogeneous cohort. When all members of the cohort experience the same or similar adverse outcomes, the outcome target can be set at an individual level.
A comparison group is not required. Because each individual is assessed directly against the outcomes, there is no need to set up a comparison group. Changed or improved outcomes can be specified on a rate card that shows the payment amount per individual when an outcome is achieved. If there is no comparison group, however, the outcomes achieved are likely to include some that would have happened anyway, without the intervention. This can only be avoided if the outcome payer has a reliable way to estimate what would have happened anyway, known as the deadweight, and the payment level should be adjusted downwards accordingly, as in the sketch below.
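A minimal sketch of that adjustment, assuming invented rate-card prices, outcome counts and a deadweight estimate:

```python
# Hedged sketch: rate-card payments at the individual level, adjusted for
# deadweight (the share of outcomes assumed to have happened anyway).
# All figures are invented for illustration.
rate_card = {
    "sustained_employment": 3000,   # payment per individual achieving the outcome
    "qualification_gained": 1500,
}
outcomes_achieved = {
    "sustained_employment": 40,     # individuals achieving each outcome
    "qualification_gained": 55,
}
deadweight = 0.15  # assumed: 15% of outcomes would have occurred anyway

total_payment = sum(
    rate_card[o] * outcomes_achieved[o] * (1 - deadweight)
    for o in outcomes_achieved
)
print(f"Deadweight-adjusted payment: £{total_payment:,.0f}")
```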
Works best if adverse outcomes vary considerably. It is harder to set standard measures of success at the individual level if the cohort is made up of different individuals.
Usually requires a comparison group. This is needed so the outcomes achieved through the contracted intervention can be compared against a group that did not receive the service.
This comparison is known as a counterfactual and checks that the outcome would not have happened without the intervention. In an outcomes-based contract, it can be used to assess whether payments should be made. You can find out more about the methodology used to measure the impact of the project here. In this case, payment was handled through a rate card whereby a price was set for each outcome.
You can read more about rate cards in the pricing outcomes guide. The rate card set out the costs of delivering specific outcomes and can be used by government and funders to make informed policy and spending decisions. This is seen as a way to streamline the process of paying for education outcomes and make it more effective.
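To illustrate the counterfactual logic described above, here is a hedged sketch that estimates the outcomes attributable to an intervention by comparing it against a comparison group and then pricing them with a rate card. The group sizes, outcome counts and price are invented.

```python
# Hedged sketch: estimate impact against a comparison group (counterfactual).
# All numbers are invented for illustration.
intervention = {"participants": 200, "achieved_outcome": 120}
comparison = {"participants": 200, "achieved_outcome": 80}

intervention_rate = intervention["achieved_outcome"] / intervention["participants"]
comparison_rate = comparison["achieved_outcome"] / comparison["participants"]

# Outcomes attributable to the intervention, over and above what the
# comparison group suggests would have happened anyway.
attributable_rate = intervention_rate - comparison_rate
attributable_outcomes = attributable_rate * intervention["participants"]

price_per_outcome = 2500  # assumed rate-card price
payment = attributable_outcomes * price_per_outcome
print(f"Attributable outcomes: {attributable_outcomes:.0f}, payment: £{payment:,.0f}")
```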
This chapter sets out a checklist for setting outcome metrics, before going into greater detail about the considerations that need to be made. Please note that this should not be viewed in isolation: when outcomes are set for an outcomes-based contract, decisions will also need to be made about pricing outcomes and how you evaluate the project.
Where an outcome funder is uncertain whether the performance level they are seeking is reasonable and attainable, they should test their thinking with service providers and, if it is an impact bond contract, with investors. There may be a question as to whether there should be stretch targets for providers under a contract; these are only advisable if they can realistically be achieved.
If they are uncertain whether a stretch target is achievable, they can test their thinking with providers and investors during the development and implementation stages. In a contract that incentivises skills attainment by young people, there may be metrics and payments relating to the young person's attainment; in a contract that rewards the achievement of employment, there may be similar metrics relating to the person's employment.
This applies especially to outcomes-based contracts where outcomes form all or part of the payment. It could be argued that ensuring the outcomes framework is acceptable to all parties is the single most important consideration. If the outcome payer is prepared to pay for the achievement of an outcome, and providers and investors are comfortable having success measured by the achievement of the same outcome, a successful contract is likely.
Conversely, if one or more parties are not happy with the proposed outcome measure or the performance level required to achieve payment, it will be very difficult to conclude a contract successfully. Outcome payers should consult other parties while designing the outcomes specifications and should do so prior to any competitive process where a public contract is to be awarded.
This consultation will help gauge how acceptable the proposed framework is. Outcome metrics and targets work best when the returns to investors and outcome funders, and their respective incentives, are aligned. Note that there can be a tension between using a robust model and using a less robust model that is aligned with measures used by others in the sector.
When agreeing outcome metrics, it is important to identify ways to avoid or mitigate perverse incentives. Outcomes-based contracts are not the only arrangements prone to these, although they may be especially vulnerable; to some extent, any effort to measure outcomes is at risk.
Perverse incentives encourage contract stakeholders to behave in a way that is detrimental to contractual goals even if some outcome metrics improve. The result can be negative consequences for service users. They can occur for all parties to a contract, not just providers or investors. There are two main forms of perverse incentives; these are discussed below with examples and ways to mitigate them.
Sometimes providers, investors or intermediaries may select beneficiaries who are more likely to achieve the expected outcomes and leave the most challenging cases outside the cohort. An example of this is when an outcome is chosen that seeks to achieve an absence or reduction of referrals to a statutory agency. The perverse incentive is to simply not refer people even if they fit the criteria for statutory support. This will make it look like there has been a reduction even though it is not in the best interests of the beneficiaries.
This may occur when an outcome target does not consider the varying levels of need across the cohort. The cohort may be made up of people who have very different needs. To mitigate this, the outcome metric could be structured so that each individual's starting point is recognised and providers are rewarded for the progress made. This means that variation in the cohort is accounted for.
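One way to express this in practice is a progress-from-baseline ("distance travelled") metric, sketched below. The scoring scale and the payment per point of progress are invented for illustration.

```python
# Hedged sketch: reward progress from each individual's own baseline
# ("distance travelled") so a mixed cohort is not penalised.
# Scores and payment rate are invented for illustration.
participants = [
    {"id": "A", "baseline_score": 2, "latest_score": 6},
    {"id": "B", "baseline_score": 7, "latest_score": 9},
    {"id": "C", "baseline_score": 4, "latest_score": 4},
]
payment_per_point = 250  # assumed payment for each point of progress

for p in participants:
    progress = max(0, p["latest_score"] - p["baseline_score"])
    payment = progress * payment_per_point
    print(f"Participant {p['id']}: progress {progress} points, payment £{payment}")
```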
For example, a homelessness project may seek to settle 20 homeless people within 12 months. To mitigate the risks such a single target creates, it should reward success at small and regular intervals, or offer bonus payments. The number of homeless people settled could be measured at 12, 18 and 24 months, with bonus payments set to encourage people to remain settled in accommodation.
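The staged approach could look something like the sketch below, where outcomes are measured at 12, 18 and 24 months and a bonus is attached to people who remain settled at later reviews. The review points, counts and payment values are assumptions.

```python
# Hedged sketch: staged outcome payments with sustainment bonuses.
# Review points, counts and payment values are invented for illustration.
payment_per_person_settled = 1200
sustainment_bonus = 400  # paid per person still settled at a later review

reviews = [
    {"month": 12, "settled": 14, "still_settled_from_earlier": 0},
    {"month": 18, "settled": 5, "still_settled_from_earlier": 12},
    {"month": 24, "settled": 3, "still_settled_from_earlier": 15},
]

total = 0
for r in reviews:
    stage_payment = (r["settled"] * payment_per_person_settled
                     + r["still_settled_from_earlier"] * sustainment_bonus)
    total += stage_payment
    print(f"Month {r['month']}: £{stage_payment:,}")
print(f"Total: £{total:,}")
```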
You will need to consider how you will mitigate perverse incentives so that the outcomes are in the best interests of the beneficiaries. If you refer to the Designing a Robust Outcomes Framework section and address its three key points well, you should avoid perverse incentives. When setting outcome measures, it is fundamental to avoid measures that can be unduly influenced by factors outside the control of the provider organisation and other key stakeholders.
It may be impossible to identify social outcomes that are entirely impervious to wider changes in social policy, demographics or the labour market. However, it is wise to avoid outcomes that can easily be under- or over-achieved due to external factors.
Whilst these external factors may not be within anyone's control, it is important to determine the potential impact they could have when designing an outcomes-based programme. It is wise to take the likely influencing factors into account in the metrics; they could be positive or negative.
It is also wise to have agreements with other practitioners so they all operate in a way that enables the outcomes to be achieved. In some circumstances, outcome funders may accept obligations under the contract to manage the impact of these factors, but this is often not the case.
It may be preferable to include the monitoring and reporting of these impacts as part of the contract management process, with the expectation that the commissioner will take steps to mitigate or avoid them where possible. If you look at our case studies, you will find examples of outcome measures and targets from impact bond projects developed across the world to tackle a range of social issues, from youth unemployment and lack of access to education and high-quality health services, to homelessness and poverty.
Having understood the considerations outlined in this guide, you are equipped to write a robust outcomes framework. As part of the Collective Impact Bond Steering Group process, WLZ worked with key stakeholders in the area to analyse some of the challenges that children and young people growing up in the area face. The goal was to understand where to focus efforts and what outcomes to aim for over the long term.
Many areas were identified at each age stage. Because of the inherent complexity of the problems they are seeking to address, collective impact initiatives tend to pursue multiple outcomes, which are in turn underpinned by many indicators. However, as payment is directly linked to outcomes in the WLZ funding model, outcomes had to be identified that could be reliably measured, monitored and attributed to the programme.
In practice, this required ongoing review and refinement of the outcomes framework in the early stages of the project.
A previous version of the framework included eight outcome areas across three domains: wellbeing, learning and character. While WLZ tracks its impact across a broad set of long-term outcomes, payment for outcomes is linked to specific measures across the two years of support provided to each child on the programme, as outlined in the diagram below.
You can view more examples of outcomes frameworks in the case studies section of our website. There is a wealth of resources available for practitioners interested in developing and implementing outcomes-based approaches, including the Social Finance Technical Guide: Designing Outcome Metrics.
We will regularly review and update our material in response. Please email your feedback to golab bsg.

Gone are the days of reporting only our outputs: how many people served, how many hours of training, or how many donors we have. Effective impact measurement requires a different way of thinking and planning for most nonprofits, which means an organization-wide culture change. Rather than focusing on how many people were served, it means measuring, managing, and communicating the impact on a client that results from participating in one of our programs: what changes in their lives, not what we provided in the way of services.
Outcomes are often defined in three different categories, which can be helpful in building your dashboard.
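As a hedged sketch of that shift from outputs to outcomes, the example below (with an invented assessment scale and client records) reports change per client alongside, rather than instead of, a simple service count; this is the kind of figure an outcomes dashboard can be built on.

```python
# Hedged sketch: measure change per client (outcome) instead of only
# services delivered (output). Assessment scale and client records are invented.
clients = [
    {"name": "client_1", "intake_score": 3, "followup_score": 7, "sessions": 12},
    {"name": "client_2", "intake_score": 5, "followup_score": 6, "sessions": 15},
    {"name": "client_3", "intake_score": 2, "followup_score": 2, "sessions": 9},
]

# Output measure: how much service was provided.
total_sessions = sum(c["sessions"] for c in clients)

# Outcome measures: how much clients actually changed.
changes = [c["followup_score"] - c["intake_score"] for c in clients]
improved = sum(1 for d in changes if d > 0)
avg_change = sum(changes) / len(changes)

print(f"Output:  {total_sessions} sessions delivered")
print(f"Outcome: {improved}/{len(clients)} clients improved, "
      f"average change {avg_change:+.1f} points")
```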