
UNCT Performance Indicators for Gender Equality and the Empowerment of Women

Desk Review 2012-2014

18 June 2015


UNCT Performance Indicators for Gender Equality and the Empowerment of Women
Desk Review 2012-2014

Prepared for: UNDG Gender Equality Task Team, chaired by UN Women
Prepared by: Tony Beck (Consultant)
Managed by: Michele Ribotta (UN Women)


ACRONYMS & ABBREVIATIONS

AP – Asia Pacific
AS – Arab States/North Africa
CAP – Consolidated Appeals Process
CCA – Common Country Assessment
CEB – Chief Executives Board for Coordination
CEDAW – Convention on the Elimination of All Forms of Discrimination against Women
CHAP – Common Humanitarian Action Plan
DaO – Delivering as One
ECA – Europe and Central Asia
ESA – East and Southern Africa
GEEW – Gender Equality and the Empowerment of Women
GTG – Gender Theme Group
IANWGE – Inter-Agency Network on Women and Gender Equality
IASC – Inter-Agency Standing Committee
ILO – International Labour Organization
NGO – Non-governmental organization
LAC – Americas and the Caribbean
MDGs – Millennium Development Goals
OCHA – United Nations Office for Coordination of Humanitarian Affairs
OPT – Occupied Palestinian Territories
QCPR – Quadrennial Comprehensive Policy Review of Operational Activities for Development
RC – Resident Coordinator
RCO – Resident Coordinator Office
Scorecard – UN Country Team Performance Indicators for Gender Equality and the Empowerment of Women
SDGs – Sustainable Development Goals
SGBV – Sexual and Gender-based Violence
UN – United Nations
UNCT – United Nations Country Team
UNDAF – United Nations Development Assistance Framework
UNDG – United Nations Development Group
UNDP – United Nations Development Programme
UNEG – United Nations Evaluation Group
UN-SWAP – UN System-wide Action Plan on Gender Equality and the Empowerment of Women
UN Women – United Nations Entity for Gender Equality and the Empowerment of Women
WCA – West and Central Africa


Executive Summary

1. Background

The Quadrennial Comprehensive Policy Review of Operational Activities for Development (QCPR) calls for the UN development system to expand and strengthen its work on gender equality and the empowerment of women, including through the use of the UN Country Team Performance Indicators for Gender Equality and the Empowerment of Women (the “Scorecard”) as a planning and reporting tool for assessing the effectiveness of gender mainstreaming by UN Country Teams (UNCTs).¹ The review of the MDGs/SDGs and the post-2015 framework, as well as the UN ‘fit for purpose’ discussions, also reinforce the need to tackle gender inequality, including its underlying causes, more holistically and with expertise cutting across UN entities; they also focus on ensuring the UN works together better to deliver results, evidence-based analysis, and stronger monitoring and evaluation. The Standard Operating Procedures guidance also makes specific note of the Scorecard in its monitoring and evaluation framework. The discussions taking place concerning the post-2015 SDG stand-alone goal on gender equality and the empowerment of women mean that new impetus is needed for gender mainstreaming, including accountability tools such as the Scorecard and the UN System-wide Action Plan (UN-SWAP). This context suggests that the Scorecard is now more important in UN programming than when it was designed and should be reinvigorated.

For these reasons the UN Development Group (UNDG) Gender Equality Task Team, led by UN Women, decided to undertake a review of the Scorecard covering the period 2012-2014, building on an earlier review of performance from 2008-2011. The objective of this review is to:

• Analyze trends in UNCTs’ performance on gender since the 2008-2011 review.

• Assess strengths and weaknesses of UNCTs vis-à-vis the Scorecard Performance Indicators.

• Make recommendations as to how identified strengths can be built on and how gaps can be filled.

• Identify good practices in the strategic use of Scorecards by UNCTs, and linkages with common strategic planning at the country level.

• Review the Scorecard content and process and determine how this can be improved.

2. Methodology

Nineteen Scorecards are included in this review, as opposed to 20 for the 2008-2011 period. This review also included interviews with six regional UN Women staff and with 20 others involved with the Scorecard: country office UN Women and Resident Coordinator Office staff, staff in the UN System Coordination Division of UN Women, and consultants who have implemented the Scorecard.

1 A/RES/67/226 para 83.


3. Quantitative findings

Ten out of 56 UNDAF roll-out countries between 2012 and 2014 completed the Scorecard, or 18 per cent, and a further nine countries from other roll-out years completed the exercise. There has been improvement between 2008-2011 and 2012-2014 in the majority of Scorecard areas, including programming, decision-making, partnerships, quality control, and accountability, but limited or no progress related to UNCT capacities, budgeting, and monitoring and evaluation. There is also considerable good practice in the UN system. For example, 13 per cent of ratings for the Scorecard area on the CCA/UNDAF and 10 per cent of ratings for the area on programming exceeded minimum standards. Delivering as One (DaO) UNCTs have performed better than non-DaO UNCTs in five Scorecard areas – planning, programming, partnerships, UNCT capacities and budgeting – with monitoring and evaluation being roughly equivalent. This suggests that the DaO process has supported gender mainstreaming within UNCTs.

Strengths in performance vis-à-vis the Scorecard indicators

The most notable strength is the improvement related to the UNDAF. Between 2012 and 2014 UNCTs as a whole came close to meeting the following Scorecard indicators:

• One UNDAF outcome clearly articulates how gender equality will be promoted.

• One third to one half of UNDAF outputs clearly articulate tangible changes for rights holders and duty bearers, which will lead to improved gender equality.

The formulation of these gender-sensitive results statements is also supported by greater involvement from UN partners in UNDAF planning, particularly from national women’s machineries and excluded women, although less so from women’s NGOs and networks. Given that under the new planning arrangements for DaO UNCTs each UNDAF will be supported by a results group, the fact that most new UNDAFs include gender-related outcomes and outputs suggests that future performance on gender will improve. Other strengths are the inclusion of gender perspectives in joint programming and programmes; support to national priorities and gender mainstreaming in programme-based approaches and aid effectiveness processes; and UNCT Heads of Agency meetings taking up gender equality issues more regularly.

Weaknesses in performance vis-à-vis the Scorecard indicators

Review of the country context remains unsatisfactory in most cases. This is surprising given the attention to gender equality in UNDAF outcomes and outputs, and may be due to a disconnect between country analysis and UNDAF development. Capacity assessment and development declined between the two review periods, and is one of the worst performing indicators. This is an area that needs urgent attention. There has also been limited change in gender-related budgeting, the most challenging of the Scorecard indicators. In sum, over the period 2008-2014, of the four key elements for promoting gender mainstreaming, there have been improvements in two – accountability and strategic planning – and reverses in two – capacity development and resource tracking and allocation.


Regional analysis

UNCTs in the Asia Pacific region rated highest or among the highest in 16 out of 22 Scorecard indicators, and in all of the eight Scorecard areas. The other regions evidenced similar levels of performance to one another: for regions other than the Asia Pacific there was roughly similar performance in the areas of programming, partnerships, capacities and budgeting, but greater variation in decision-making and particularly monitoring and evaluation. No clear reasons for the better performance in the Asia Pacific region could be determined from stakeholder interviews.

4. Qualitative findings

Several respondents noted the value of the Scorecard rating system, which clearly sets out a globally agreed set of minimum standards. Other respondents noted, however, that the Scorecard is an inflexible tool that leaves limited room for interchange and dialogue about gender mainstreaming. Five out of 20 respondents were satisfied with the current process, which had worked well with their UNCTs or regions. Twelve other respondents suggested two main changes, as follows:

• A more participatory approach involving UNCTs and partners earlier in the process. This was the view voiced by 65 per cent of regional and country level respondents.

• Implementing the Scorecard when the UNDAF cycle was close to completion, and at the beginning of the new UNDAF cycle.

Follow-up to recommendations

Five out of 17 UNCTs that implemented the Scorecard in the period 2012-2014 developed a formal management response, mostly with the support of Gender Theme Groups. This report highlights examples of good practice. The quality of recommendations improved between 2008-2011 and 2012-2014: the number of recommendations has decreased, and there is greater inclusion of the three key elements for follow-up – responsibilities, timelines and budgets. However, in a number of cases recommendations are still expressed too vaguely to be actionable. Weak follow-up to recommendations in some cases may be a function of a general lack of accountability within UNDAF programming and implementation processes.

Making the Scorecard process more participatory

The review points to opportunities for revising the current Scorecard process with a view to addressing key challenges, such as:

• Encourage higher uptake by UNCTs, including in the context of UNDAF roll-out. Less than 30 per cent of UNCTs have implemented the Scorecard exercise since 2008, whereas the original intention was to have every UNCT do so.

• Increase ownership of the exercise by UNCTs.

• Ensure more systematic follow-up to recommendations.

UNCTs and consultants have taken the initiative to adapt the Scorecard methodology and introduce more participatory practices, as highlighted in Box 1 of this report. The UNDP Gender Equality Seal offers a more participatory model, some elements of which the Scorecard may wish to replicate. The Gender Equality Seal resembles the Scorecard in that it includes a set of minimum standards, but it takes a participatory approach, working with UNDP Country Offices over six to eight months to achieve these standards. Greater participation should lead to more buy-in to recommendations and follow-up. There is some evidence that this is the case; however, it was difficult to track this systematically as few UNCTs have rigorous follow-up methods. The disadvantages of a more participatory process include potentially losing the Scorecard's accountability focus, and a potential need for greater investment of resources in the Scorecard process.

Adapting the Scorecard for humanitarian settings

Respondents noted that the Scorecard could be adapted to be more relevant for humanitarian settings, and that if adapted in this way it would be more likely to be used.

Current accountability mechanisms on gender equality and the empowerment of women

The review compared four accountability and learning mechanisms: the Scorecard, the UN-SWAP, the ILO Participatory Gender Audit and the UNDP Gender Equality Seal. Given the existence of several accountability and learning mechanisms, the UNDG and the Inter-Agency Network on Women and Gender Equality (IANWGE) will need to ensure their complementarity and promote understanding of their different purposes and uses, in particular at country level. Both the UN-SWAP and the Scorecard, for example, measure performance on gender mainstreaming, but the two frameworks have different processes, focus and audiences. Exploring ways of ensuring further complementarity will be important, including, for example, archiving Scorecard data in the UN-SWAP web-based tool and exchanging lessons on successful implementation.

5. Recommendations

Institutional arrangements within the UNDG

1) Reconfirm the commitment and shared responsibility of all members of the UNDG Gender Task Team for promoting and supporting greater uptake of the Scorecard by UNCTs.

2) Feature the Scorecard as a standing item in the work plan of the UNDG Gender Equality Task Team, specifying deliverables, division of labor and funding requirements.

3) Promote greater participation of other UNDG entities in Scorecard implementation.

4) Clarify the roles and responsibilities of UN staff at HQ, regional and country levels, drawing on the analysis in Section 4.8.

5) Liaise with other UNDG Working Groups that are developing new accountability/performance assessment tools for UNCTs (e.g. the UNDG Human Rights Mechanism), to ensure consistency across tools and mechanisms, and avoid overburdening of UNCTs with inconsistent requirements.

6) Liaise with relevant UNDG WGs to ensure that UNDG programming guidance to UNCTs draws on the Scorecard tool and this global review. This would include recognizing the implementation of the Scorecard as a key input into UNDAF monitoring and evaluation by UNCTs, and suggesting that the Scorecard is carried out at least once towards the end of every UNDAF cycle.


Updating and upgrading the Scorecard tool

7) Revise and update the Scorecard performance areas and indicators to ensure relevance and applicability for UNCTs, as well as consistency and alignment with the UN-SWAP. This would include reviewing and piloting the humanitarian indicators (see Annex 5).

8) Review the Scorecard methodology to make the process more participatory including by drawing from good practices from the field as well as from methodologies applied to other existing tools (e.g. the UNDP Gender Equality Seal).

9) Further explore complementarities and opportunities for aligning the Scorecard with other existing mechanisms, including the UN-SWAP.

10) Develop a standard template – and related guidance – for Scorecard recommendations and the management response, to ensure timely implementation of Scorecard recommendations by UNCTs and related review.

Communication and knowledge management

11) Make the current Scorecard review report available to the UNDG and UNCTs.

12) Upload all Scorecard reports implemented from 2012 to date on the UNDG web site.

13) Update the Scorecard Users' Guide available to UNCTs, including a short description clarifying the purpose and focus of the different tools and accountability mechanisms currently in place.

14) Prepare a new communication package for RCs/HCs reaffirming the expectation that UNCTs complete the Scorecard exercise, as confirmed in the QCPR, and sharing the relevant resources and support available.

15) Ensure learning and experience sharing for relevant UN staff around UNCTs' performance assessment on gender equality, including through webinars with UN staff and experts who have supported UNCTs with the implementation of the Scorecard.


TABLE OF CONTENTS

1. Background and purpose
2. Methodology
• Outline of data set and method of analysis
• Limitations
3. Quantitative findings
3.1 Strengths of the UN system at the country level
3.2 Weaknesses of the UN system at the country level
3.3 Regional analysis
3.4 Delivering as One UNCTs
3.5 Percentage of countries meeting minimum standards in at least half of Scorecard areas
4. Qualitative findings
4.1 The Gender Scorecard process
4.1.1 Should the UNDG introduce a more participatory process?
4.2 Follow-up to recommendations
4.3 Knowledge sharing
4.4 The Gender Scorecard and other accountability mechanisms
4.5 The Gender Scorecard and the UN-SWAP
4.6 Humanitarian settings
4.7 Scorecard content and UNDG Guidance
4.8 Roles and responsibilities
5. Recommendations

Annexes

Annex 1: List of interviewees
Annex 2: UNDP Gender Equality Seal
Annex 3: Examples of good quality recommendations
Annex 4: Overlap between the UN-SWAP and the Scorecard
Annex 5: Draft Performance indicators for humanitarian, emergency, recovery and post-conflict situations

Tables and Boxes

Table 1: Scorecard uptake, 2012-2014
Table 2: Average Scorecard ratings for 39 UNCTs, 2008-2011 and 2012-2014
Table 3: Scorecard ratings by indicator
Table 4: Scorecard ratings by region, area and indicator (number of Scorecards), 2008-2014
Table 5: Rating by area for Delivering as One UNCTs, 2008-2014
Table 6: Per cent of countries meeting Scorecard minimum standards in four areas
Table 7: Quality of recommendations, 2008-2011 and 2012-2014
Table 8: Coherence and complementarity between four accountability and learning gender mainstreaming mechanisms
Box 1: Example of participatory practices introduced to Scorecard implementation


1. BACKGROUND AND PURPOSE

The QCPR calls for the UN development system to expand and strengthen its work on gender equality and the empowerment of women, including through the use of the Scorecard as a planning and reporting tool for assessing the effectiveness of gender mainstreaming by UNCTs. The review of the MDGs/SDGs and the post-2015 framework, as well as the UN “fit for purpose” discussions, also reinforce the need to tackle gender inequality, including its underlying causes, more holistically and with expertise cutting across UN entities, and focus on ensuring the UN works together better to deliver results, evidence-based analysis, and stronger monitoring and evaluation. The discussions taking place concerning the post-2015 SDG stand-alone goal on gender equality and the empowerment of women mean that new impetus is needed for gender mainstreaming, including accountability tools such as the Scorecard and the UN-SWAP for implementation of the Chief Executives Board for Coordination policy on gender equality and the empowerment of women (CEB/2006/2). This context suggests that the Scorecard is now more important in UN programming than when it was designed and should be reinvigorated.

The Scorecard establishes an accountability framework for assessing the effectiveness of gender mainstreaming by UNCTs, and is one part of the accountability framework developed in response to the UN Chief Executives Board for Coordination (CEB) 2006 Policy on gender equality and the empowerment of women (CEB/2006/2). This framework also includes the UN-SWAP, which is discussed in more detail in Section 4. The Scorecard was developed by the UNDG Gender Equality Task Team and endorsed by the UNDG Principals in 2008. It was rolled out in 2008, with a first desk review of 20 Scorecards undertaken in 2011.

Following the first baseline review of Scorecards, the UNDG Gender Equality Task Team, led by UN Women, decided to undertake a second review covering the period 2012-2014.² The objective of this second review is to:

• Analyze trends in UNCTs’ performance on gender since the baseline review took place.

• Assess strengths and weaknesses of UNCTs vis-à-vis the performance dimensions of the Scorecard.

• Make recommendations as to how identified strengths can be built on and how gaps can be filled.

• Identify good practices in the strategic use of Scorecards by UNCTs, and linkages with common strategic planning at the country level.

• Review the Scorecard content and process and determine how this can be improved, including with reference to further alignment with UN-SWAP indicators.

Section 2 of this report sets out its methodology. Section 3 synthesizes results from the data set for 2012-2014, in comparison to the 2008-2011 results, mainly focusing on quantitative results. Section 4 complements Section 3 by carrying out a qualitative analysis examining the Scorecard implementation process and how and why it has been effective, with a focus on good practice. Section 5 sets out recommendations, including suggested changes to the Scorecard process.

2 The review was undertaken by Dr. Tony Beck, a consultant to UN Women.


2. METHODOLOGY

Nineteen Scorecards are included in this review, as opposed to 20 for the 2008-2011 review, so the samples are roughly comparable (the first Scorecard was not completed until late 2008, so in practice the earlier review also covered a three-year period). The countries included in this review can be found in Table 1. Of these 19 countries, the Scorecards for Cambodia and India had not been formally approved by the UNCT at the time of writing, but were included because they had complete reports. A similar analysis to that used for the 2008-2011 data set was applied, with the addition of a comparison to findings from the earlier data set, as well as a regional analysis.

This review also included interviews with six regional UN Women staff and with 20 others involved with the Scorecard: country office UN Women and Resident Coordinator Office staff, staff in the UN Women Coordination Division, and consultants who have implemented the Scorecard. A list of interviewees is included as Annex 1. A limitation of this review is that it proved challenging to conduct interviews with staff from entities other than UN Women. Attempts were made to set up interviews with staff from other entities and Resident Coordinator Offices but these were unsuccessful, which is perhaps symptomatic of the perceived lack of ownership by other entities.

Of the 39 countries for which Scorecard reports are available, the regional breakdown is as follows:

• Asia and the Pacific (AP): 11

• West and Central Africa (WCA): 3

• East and Southern Africa (ESA): 11

• Europe and Central Asia (ECA): 8

• Americas and the Caribbean (LAC): 5

• Arab States/North Africa (AS): 1

Findings from the regional analysis should therefore be read in light of the relatively small number of Scorecards completed in WCA and AS. Three of the Scorecards (Colombia, Nepal and Sudan) only provided aggregate ratings by area, as opposed to by indicator, and this has been taken into account in the analysis. As noted in Section 3, 18 per cent of UNDAF roll-out UNCTs completed the Scorecard between 2012 and 2014, so the question arises as to the representativeness of the data in this report. While the data set may not be fully representative of all UNCT work on gender mainstreaming (which includes other reviews such as gender audits), it does cover some 25 per cent of the countries in which the UN is active. The consistency in Scorecard ratings over seven years suggests that the findings of this report can be used to assess the UN's work on gender mainstreaming at country level as a whole.


3. QUANTITATIVE FINDINGS

This Section provides a quantitative analysis of Scorecard results, with a focus on a comparison between the previous and current data sets. Table 1 lists the countries that undertook the Scorecard exercise in the 2012-2014 period.

Table 1: Scorecard uptake, 2012-2014³

Country | Roll-out year | Completed | International or national consultant

Albania 2015 March 2014 International

Bolivia 2011 April 2013 National

Cambodia 2014 July 2014 International

Cameroon 2016 November 2012 International

Colombia 2014 February 2013 National

Guatemala 2013 January 2013 National

India 2016 December 2013 National

Indonesia 2014 June 2012 International

Jordan 2016 September 2012 National

Kenya 2013 May 2012 International

Kosovo 2014 March 2014 National

Malawi 2015 December 2013 Internal

Maldives 2014 April 2012 International

Nepal 2016 December 2013 International

Rwanda 2016 September 2011 International

Sudan 2015 February 2012 International

Timor Leste 2013 May 2013 International

Vietnam 2015 May 2011 International and national

Zimbabwe 2014 November 2011 International and national

Ten out of 56 UNDAF roll-out countries between 2012 and 2014 completed the Scorecard, or 18 per cent, and a further nine countries from other roll-out years completed the exercise. Six Scorecards were completed by national consultants, ten by international consultants, two jointly by national and international consultants, and one internally; this is roughly similar to the 2008-2011 data set. Although the Scorecard was set up so that it could be implemented by national consultants, in order to use and support national capacity, a majority of UNCTs are using international consultants, either because national consultants are not available or because the profile of an international consultant was required.

Table 2 shows average ratings across the eight Scorecard areas for 2008-2011 and 2012-2014. The Scorecard uses a six-point rating system:

5 = Exceeds minimum standards
4 = Meets minimum standards
3 = Needs improvement
2 = Inadequate
1 = Missing
0 = Not applicable

3 The OPT and Jordan undertook the Scorecard exercise in 2014; the final reports were not available for this review and are not included in the data analysis, however interviews were carried out with UN Women and RCO staff in the OPT and Jordan. The UNCTs included in the 2008-2011 review are: Armenia, Azerbaijan, Bhutan, Bosnia and Herzegovina, Cambodia, Cape Verde, Comoros, Ecuador, Ethiopia, Eritrea, Fiji, Macedonia, Malawi, Mali, Mozambique, Samoa, Serbia, Somalia, Tajikistan, and Venezuela.
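As a simple illustration of how the six-point ratings feed into the area averages reported in Tables 2 and 3, the sketch below aggregates hypothetical indicator ratings for one Scorecard area. The indicator codes and values are invented for illustration, and the exclusion of 0 ("Not applicable") ratings from the average is an assumption rather than a rule stated in this report.

    # Illustrative only: hypothetical ratings on the Scorecard's six-point scale.
    # Assumption: ratings of 0 ("Not applicable") are left out of the average.
    def area_average(ratings):
        """Average the applicable indicator ratings for one Scorecard area."""
        applicable = [r for r in ratings.values() if r != 0]
        return round(sum(applicable) / len(applicable), 2) if applicable else None

    # Hypothetical ratings for area 1 (Planning), indicators 1.a-1.e
    planning = {"1.a": 3, "1.b": 4, "1.c": 4, "1.d": 3, "1.e": 0}
    print(area_average(planning))  # -> 3.5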


Table 2: Average Scorecard ratings for 39 UNCTs, 2008-2011 and 2012-2014

Scorecard area | Average rating 2008-11 | Average rating 2012-14

1. Planning 3.3 3.3

2. Programming 3.67 3.92

3. Partnerships 2.95 3.15

4. UNCT capacities 3 2.9

5. Decision-making 3.4 3.7

6. Budgeting 2.5 2.6

7. Monitoring and Evaluation 2.8 2.8

8. Quality control and accountability 2.7 3.2

Table 2 illustrates improvement in the majority of areas, including programming, partnerships, decision-making, and quality control and accountability. However, it also demonstrates limited or no progress related to UNCT capacities, budgeting, and monitoring and evaluation. A "4" rating is the agreed minimum standard for the UN system, and UNCTs as a whole do not yet meet this standard in any of the Scorecard dimensions. At the rate of progress achieved since 2008 it would take many years before the UN system meets minimum standards in a number of the indicators, although it is close to achieving that goal for some indicators (see Table 3 below). This is in contrast to the UN-SWAP, where the UN system as a whole saw an aggregate increase in performance on reported UN-SWAP indicators from 31 to 42 per cent between 2012 and 2013. This is partly due to the UN-SWAP methodology, which facilitates graduated improvement for the whole UN system because of the low level of achievement required for certain Performance Indicators. Activities related to the UN-SWAP, such as the introduction of gender elements into performance assessment, mandatory training, and the development of gender markers, will support improvement in the Scorecard.

It should be noted that averages can be deceiving, in particular given the way in which the Scorecard minimum standards are formulated, which makes meeting some standards challenging. For example, the requirement in indicator 1.e that all key data be disaggregated by sex has been met by almost no UNCTs, which has brought down the overall average. There is also considerable good practice in the UN system. For example, 13 per cent of ratings for Scorecard area one on the CCA/UNDAF and 10 per cent of ratings for area two on programming exceeded minimum standards.

Table 3 provides a disaggregated analysis of strengths and weaknesses of the UN system by comparing average ratings by indicator for 2008-2011 and 2012-2014.

Table 3: Scorecard ratings by indicator

Scorecard dimension | Average rating 2008-2011 | Average rating 2012-2014

1.a - Adequate UNCT review of country context related to gender equality and women's empowerment | 3.3 | 3.4
1.b - Gender equality and women's empowerment in UNDAF outcomes | 3.7 | 3.9
1.c - Gender equality and women's empowerment in UNDAF outputs | 3.3 | 3.9
1.d - Indicators to track UNDAF results are gender-sensitive | 3.5 | 3.4
1.e - Baselines are gender-sensitive | 3 | 2.7
2.a - Gender perspectives are adequately reflected in joint programming | 3.9 | 4
2.b - Joint programmes | 3.6 | 3.8
2.c - UNCT support for national priorities related to gender equality and women's empowerment | 3.8 | 4.2
2.d - UNCT support to gender mainstreaming in programme based approaches | 3.4 | 3.7
2.e - UNCT support to gender mainstreaming in aid effectiveness processes | 3.4 | 3.8
3.a - Involvement of National Machineries for Women / Gender Equality and women's departments at the sub-national level | 3 | 3.5
3.b - Involvement of women's NGOs and networks | 2.6 | 2.8
3.c - Women from excluded groups included as programme partners and beneficiaries in key UNCT initiatives | 2.6 | 3.3
4.a - Multi-stakeholder Gender Theme Group is effective | 3.1 | 3.4
4.b - Capacity assessment and development of UNCTs in gender equality and women's empowerment programming | 2.7 | 2.5
4.c - Gender expert roster with national, regional and international expertise used by UNCT members | 3 | 2.9
5.a - Gender Theme Group coordinator is part of UNCT Heads of Agency group | In 9 out of 16 cases | In 9 out of 15 cases⁴
5.b - UNCT Heads of Agency meetings regularly take up gender equality programming and support issues | 3.4 | 3.7
6.a - UNCT Gender responsive budgeting system instituted | 2 | 2.1
6.b - Specific budgets allocated to stimulate stronger programming on gender equality and women's empowerment | 3.2 | 3
7.a - Monitoring and evaluation includes adequate attention to gender mainstreaming and the promotion of gender equality and women's empowerment | 2.8 | 2.8
8.a - CCA/UNDAF quality control | 2.7 | 3.2

3.1 Strengths of the UN system at the country level

The most notable strength is perhaps the improvement related to the UNDAF. Between 2012 and 2014 UNCTs as a whole came close to meeting the following Scorecard indicators:

• One UNDAF outcome clearly articulates how gender equality will be promoted (rating 3.9).

• One third to one half of UNDAF outputs clearly articulate tangible changes for rights holders and duty bearers, which will lead to improved gender equality (rating 3.9).

The achievement in relation to the UNDAF results statements is spread across regions. This means that almost all new UNDAFs where Scorecards have been completed now have at least one outcome related to gender equality and the empowerment of women (GEEW), with four UNCTs having more than one gender-related outcome (India, Indonesia, Rwanda and Vietnam). These outcomes are also being supported by appropriate outputs. The formulation of these gender-sensitive results statements is also supported by greater involvement from UN partners in UNDAF planning, particularly from national women's machineries and excluded women, although less so from women's NGOs and networks (Table 3, indicators 3.a, b and c). This suggests greater input and ownership by some national actors. On the other hand, indicators and baselines to track UNDAF results statements (Table 3, 1.d and 1.e) are less evident in the latter period. Additionally, there has been no improvement in monitoring and evaluation, although this may be a consequence of generally poor UNCT monitoring and evaluation rather than being specific to GEEW. In DaO contexts, the inclusion of gender-related outcomes and outputs in the UNDAF is likely to be correlated with improved performance on gender, with strategic planning ensured by joint results groups driving capacities and resources to promote cross-cutting themes.⁵

4 Data was missing in four cases for both data sets.
5 UNDG (2014) Standard Operating Procedures for Countries Adopting the Delivering as One approach. New York: UN Development Group.

Other strengths are the inclusion of gender perspectives in joint programming and programmes, with 12 UNCTs in 2008-2011 and 13 in 2012-2014 achieving the minimum standard. Support to national priorities and gender mainstreaming in programme-based approaches and aid effectiveness processes has also improved, and is above or close to minimum standards in most cases. Support to national priorities was the highest rated of any indicator (4.2), with six UNCTs exceeding the standard (Albania, Cameroon, Indonesia, Kenya, Rwanda, and Zimbabwe). There has also been some improvement in accountability mechanisms, with UNCT Heads of Agency meetings taking up gender equality and support issues more regularly (rating of 3.7 in 2012-14, as opposed to 3.4 in 2008-11), and greater screening of the CCA/UNDAF at the regional level, although this remains at a relatively low level (an increase from 2.7 in 2008-2011 to 3.2 in 2012-2014).

3.2 Weaknesses of the UN system at the country level

Review of country context related to GEEW (1.a) remains unsatisfactory in most cases. Only 6 UNCTs met or exceeded the minimum standard, with 5 of these in the AP region. This is surprising given the attention to gender equality in UNDAF outcomes and outputs, and may be due to a disconnect between country analysis and development of the UNDAF. Capacity assessment and development (4.b) declined between the two review periods and, with a rating of 2.5, is one of the worst performing indicators. This is an area that needs urgent attention and should be addressed, including through the roll-out of the GEEW e-module led by UN Women with the participation of several UN entities.

There has also been limited change in gender-related budgeting. The indicator on gender-responsive budgeting (6.a) has been the most challenging of the Scorecard indicators, because UNCTs do not function on the basis of a joint budget. Allocations to Resident Coordinator Offices (RCOs) and Joint Programme budgets could be used for the analysis. The move towards a Common Budgetary Framework within the UNDAF may make this indicator more relevant; however, as entities introduce their own gender marker systems under UN-SWAP requirements, a decision will need to be made as to whether there should also be a gender marker for UNDAF budgets (see Section 4 for further discussion). The decline in the number of specific budgets for GEEW (6.b) is of particular concern (although the review was not able to determine whether the actual figure for GEEW had increased or decreased), and the inclusion of gender outcomes and outputs in the UNDAF does not appear to have translated into increased resources.

In sum, of the four key elements for promoting gender mainstreaming, there have been improvements in two – accountability (5.b and 8.a) and strategic planning (1.b and 1.c) – and reverses in two – capacity development (4.b) and resource tracking and allocation (6.a and 6.b). Resource tracking should improve slowly between now and 2017 as a result of activity on gender markers generated by the UN-SWAP.

3.3 Regional analysis

This Section covers regional performance between 2008 and 2014 with a focus on the four regions that have completed five or more Scorecards (AP, ESA, ECA, and LAC). Table 4 sets out ratings by region for Scorecard areas and indicators.

Table 4: Scorecard ratings by region, area and indicator (number of Scorecards), 2008-2014

Scorecard area and indicator | AP (11) | WCA (3) | ESA (11) | ECA (8) | LAC (5) | ASRO (1)

1.a - Adequate UNCT review of country context related to gender equality and women's empowerment | 3.95 | 3 | 2.88 | 3.25 | 3 | 3
1.b - Gender equality and women's empowerment in UNDAF outcomes | 4.3 | 4 | 4 | 3.15 | 4 | 3
1.c - Gender equality and women's empowerment in UNDAF outputs | 3.85 | 3.5 | 3.13 | 3.65 | 3.5 | 3
1.d - Indicators to track UNDAF results are gender-sensitive | 3.75 | 3 | 3.13 | 3.75 | 3.25 | 3
1.e - Baselines are gender-sensitive | 3.35 | 3 | 2.13 | 3.35 | 2.5 | N/A
Average for area 1 | 3.84 | 3.3 | 3.05 | 3.43 | 3.35 | 3
2.a - Gender perspectives are adequately reflected in joint programming | 4.35 | 3 | 4.13 | 4 | 3.5 | 2
2.b - Joint programmes | 3.8 | 2 | 3.88 | 3.85 | 3.75 | 4
2.c - UNCT support for national priorities related to gender equality and women's empowerment | 4.25 | 4.5 | 4.13 | 4.1 | 3.5 | 3
2.d - UNCT support to gender mainstreaming in programme based approaches | 3.95 | 3.5 | 3.88 | 3.3 | 3.25 | 3
2.e - UNCT support to gender mainstreaming in aid effectiveness processes | 3.85 | 3.5 | 3.6 | 3.75 | 3.75 | 3
Average for area 2 | 4 | 3.3 | 3.9 | 3.8 | 3.6 | 3
3.a - Involvement of National Machineries for Women / Gender Equality and women's departments at the sub-national level | 3.75 | 4 | 3.38 | 2.75 | 3 | 3
3.b - Involvement of women's NGOs and networks | 3.1 | 3.5 | 2.53 | 2.75 | 2.5 | 1
3.c - Women from excluded groups included as programme partners and beneficiaries in key UNCT initiatives | 2.3 | 1.5 | 3.5 | 2.9 | 3 | 3
Average for area 3 | 3.5 | 2.68 | 3.04 | 2.78 | 2.87 | 2.33
4.a - Multi-stakeholder Gender Theme Group is effective | 3.5 | 4 | 3.13 | 2.6 | 3.25 | N/A
4.b - Capacity assessment and development of UNCTs in gender equality and women's empowerment programming | 2.85 | 2.5 | 2.25 | 2.6 | 2.75 | 2
4.c - Gender expert roster with national, regional and international expertise used by UNCT members | 3.65 | 3 | 2.38 | 3.1 | 2.75 | 1
Average for area 4 | 3.3 | 3.2 | 2.6 | 2.8 | 2.9 | 1.5
5.a - Gender Theme Group coordinator is part of UNCT Heads of Agency group | 5/8 | 2/2 | 4/8 | 5/8 | 2/4 | 0/1
5.b - UNCT Heads of Agency meetings regularly take up gender equality programming and support issues | 4.1 | 3 | 3.5 | 3.6 | 3 | 3
Average for area 5 | N/A | N/A | N/A | N/A | N/A | N/A
6.a - UNCT Gender responsive budgeting system instituted | 2.45 | 1 | 2.25 | 2.4 | 2 | 1
6.b - Specific budgets allocated to stimulate stronger programming on gender equality and women's empowerment | 3.1 | 3.5 | 3.13 | 3.25 | 2.5 | 2
Average for area 6 | 2.8 | 2.3 | 2.7 | 2.8 | 2.3 | 1.5
7.a - Monitoring and evaluation includes adequate attention to gender mainstreaming and the promotion of gender equality and women's empowerment | 3.5 | 2.25 | 2.25 | 3.15 | 3.35 | 2
8.a - CCA/UNDAF quality control | 4.05 | 2.25 | 2.55 | 2.5 | 3.35 | 3

Notable features of Table 4 are:

• The performance of AP UNCTs, which rated highest or equal highest in 16 out of 22 Scorecard indicators, and in all of the Scorecard areas. As 11 Scorecards from this region have been included, this result is unlikely to be caused by methodological bias. It was not possible through interviews or other data analysis to determine why the AP Region performed better than other areas, but one reason may be greater capacity at Regional and UNCT level in the AP Region.

• Similar performance of the other regions (excluding the AS Regional Office where only one Scorecard has been completed). Some regions were stronger in some of the Scorecard areas, but upon aggregation of all Scorecard areas, the achievement of regions other than AP was similar.

• Roughly similar performance in the areas of programming, partnerships, capacities, budgeting, but greater variation in decision-making and particularly monitoring and evaluation.

There are at least two implications of these regional findings. The first is that UNDG Regional staff can use Scorecard findings to tailor support to the country level; for example, where a region is scoring poorly in capacity development, the Regional Office can focus on this area. The second is that there is potential for inter-regional learning and support, in particular concerning the transfer of good practice. To date a significant source of knowledge transfer in relation to the Scorecard has been UN Women staff moving between regions, or the Scorecard Help Desk. However, there is significant potential for greater knowledge exchange about both the Scorecard process and results. The potential role of Regional UNDG staff is further defined in Section 4.8.

3.4 Delivering as One UNCTs

It might be expected that the DaO process would improve gender mainstreaming because it would lead to a more coherent focus on UNDAF priorities. Of the eight DaO pilots, five – Albania, Cape Verde, Mozambique, Rwanda, and Vietnam – completed the Scorecard. Results for these UNCTs in comparison to the other 34 that completed the Scorecard between 2008 and 2014 can be found in Table 5.

Table 5: Rating by area for Delivering as One UNCTs, 2008-2014

Scorecard area | Rating by area for five DaO UNCTs | Rating by area for non-DaO UNCTs

1. Planning 3.4 3.28

2. Programming 3.97 3.77

3. Partnerships 3.27 3.01

4. UNCT capacities 3.21 2.93

5. Decision-making 3.2 3.65

6. Budgeting 2.8 2.58

7. Monitoring and evaluation 2.8 2.82

8. Quality control and accountability 2.4 3.12

Table 5 demonstrates that the DaO UNCTs have performed better than non-DaO UNCTs in five Scorecard areas – planning, programming, partnerships, UNCT capacities and budgeting. The results for one area – monitoring and evaluation – are roughly equivalent. Where there are better results for DaO UNCTs, the differential is consistent across areas. Table 5 provides evidence to suggest that the DaO process has, overall, supported gender mainstreaming within UNCTs.

3.5 Percentage of countries meeting minimum standards in at least half of Scorecard areas

The QCPR included a tracking indicator related to the Scorecard: “per cent of countries conducting the gender Scorecard that meet minimum standards (rating 4) in at least half of the gender Scorecard areas”. Table 6 sets out performance vis-à-vis this QCPR tracking indicator.

Table 6: Per cent of countries meeting Scorecard minimum standards in four Scorecard areas

Year | Number of completed Scorecards | % of countries meeting minimum standards in at least half of Scorecard areas (country)

2008 3 33 (Bhutan)

2009 11 8.3 (Cambodia)

2010 6 0

2011 3 33 (Vietnam)

2012 6 16.7 (Indonesia)

2013 6 16.7 (India)

2014 4 0

The overall average for the seven years is 13 per cent. At least one UNCT has achieved the QCPR indicator in each year since 2008, except in 2010 and 2014 when none did. Achievement by year is irrespective of the number of UNCTs undertaking the exercise. Perhaps most noteworthy is that all of the UNCTs meeting the QCPR indicator are in the AP region, again demonstrating the superior performance of UNCTs in that region. The 2015 Secretary-General report on the implementation of the QCPR (A/70/62–E/2015/4) gives a value of 52 per cent for this indicator (see indicator 36), suggesting a significant discrepancy between the ratings in Scorecard reports and the information provided by RCs/UNCTs in DESA-administered surveys.
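The arithmetic behind this QCPR tracking indicator is straightforward; the minimal sketch below shows one way it could be computed from area-level ratings. The country names and ratings are hypothetical, and reading "at least half" as four or more of the eight Scorecard areas is an assumption.

    # Illustrative only: hypothetical area ratings for the eight Scorecard areas.
    MINIMUM_STANDARD = 4  # "Meets minimum standards" on the six-point scale

    def meets_qcpr_indicator(area_ratings):
        """True if a UNCT rates 4 or above in at least half of its Scorecard areas."""
        met = sum(1 for rating in area_ratings if rating >= MINIMUM_STANDARD)
        return met >= len(area_ratings) / 2

    scorecards = {
        "Country A": [4, 4, 3, 4, 5, 3, 3, 4],  # meets the standard in 5 of 8 areas
        "Country B": [3, 4, 3, 3, 3, 2, 3, 3],  # meets it in 1 of 8 areas
        "Country C": [4, 4, 4, 3, 3, 3, 4, 3],  # meets it in exactly 4 of 8 areas
    }

    qualifying = [c for c, r in scorecards.items() if meets_qcpr_indicator(r)]
    share = 100 * len(qualifying) / len(scorecards)
    print(f"{share:.0f}% of countries meet the QCPR indicator: {qualifying}")
    # -> 67% of countries meet the QCPR indicator: ['Country A', 'Country C']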


4. QUALITATIVE FINDINGS

4.1 The Gender Scorecard process

When the Scorecard was developed and piloted in 2006-2008 it was mainly conceptualized as an exercise for holding the RC and UNCT accountable for gender mainstreaming. This has meant that the Scorecard is usually an external exercise involving a consultant visiting the UNCT for a week or ten days. Several respondents noted the value of the Scorecard rating system, which clearly sets out a globally agreed set of minimum standards. It is not, in the words of one respondent, “UN Women telling the UNCT what to do”. Other respondents noted, however, that the Scorecard is an inflexible tool that leaves limited room for interchange and dialogue about gender mainstreaming. In total, five out of 20 respondents were satisfied with the current process, which had worked well with their UNCTs or regions. The remainder suggested various changes, as follows:

• A more participatory approach involving UNCTs and partners earlier in the process. This was the view voiced by the majority of regional and country level respondents - 12 out of 20. One respondent put it as follows: “I would suggest that the Scorecard emphasize the need for a participatory process, without which it is unlikely that the UN system will fully support the scoring and the recommendations. And as with any joint initiative, I would suggest that the exercise is also presented as a means not only to an end, but to building capacities and shared thinking on gender priorities.”

• Making the Scorecard exercise mandatory to ensure UNCT buy-in and follow-up – voiced by two respondents. One said: “It needs to have consequences – this would be unwelcome to the entities but would support compliance – right now it’s a secondary topic.”

• Implementing the Scorecard when the UNDAF cycle was close to completion, and at the beginning of the new UNDAF cycle. The end of the UNDAF cycle coincides with the planning phase for the new cycle, which is why it makes sense to undertake the exercise at that time, because as well as assessing past performance it will feed into planning for the new cycle.

Making the exercise mandatory was discussed during Scorecard development, and was also a recommendation of the previous Scorecard review. During Scorecard development it was decided not to make the process mandatory, to ensure that there was no opposition from within the UNDG. A similar decision was made during the last QCPR. The advantages of a mandatory process would likely be increased uptake, but not necessarily greater buy-in from UNCTs. As the idea of making the Scorecard mandatory was voiced by only a small minority of respondents, it has not been carried through into the recommendations of this report.

In addition to the requests for changes from respondents, the UNDG Gender Equality Task Team should revise the current process for the following reasons:

Level of uptake by UNDAF roll-out countries. Some 25 per cent of roll-out countries have implemented the Scorecard exercise since 2008 (Table 1 and discussion), whereas the original intention was that all UNCTs implement it periodically. This was clear in the UNDG Chair's letter to RCs in 2008: “The indicators are for use by all UNCTs. In particular, we are asking UN Country Teams that are developing UNDAFs for 2008 and 2009 to establish a baseline using the performance indicators, so that they can measure changes over the period of the next UNDAF. The UNDG will monitor their use and make adaptations as needed, based on feedback from the UNCTs. Eventually, every UNCT should be using these indicators systematically as an internal accountability mechanism, and to identify where progress is made or additional support is needed.”

Systematic uptake by all UNCTs has not happened for at least six reasons: 1) the Scorecard exercise is not mandatory; 2) lack of, or limited, capacity to implement the Scorecard; 3) increasing pressure on UNCTs to deal with competing requests and expectations, with declining resources available; 4) the Scorecard may be perceived as an accountability tool through which UNCTs are mainly given (often negative) ratings, as opposed to its findings being used to inform forward-looking strategic planning; 5) UNCTs may have decided to adopt other accountability mechanisms such as gender audits; and 6) the exercise has not been fully integrated into UNCT work plans and processes. On the last point one respondent said, echoing the majority view: “The Scorecard has been seen as ‘UN Women's business’, so it has been marginalized. Everyone looks at UN Women as being the gender agency in the region.”

On UN capacity for implementation, there has been relatively limited investment of staff time and financial resources in Scorecard implementation. Central support to the Scorecard did not include dedicated full-time capacity at UN Women HQ, and was limited to one consultant for approximately 25 days a year. This is in contrast to the UN-SWAP, where CEB approval and an injection of resources from both UN Women and other entities have bolstered uptake; support to the implementation of the UN-SWAP included full-time staff in UN Women and consultancy services, as well as in-kind support from other entities. Unsurprisingly, the under-investment in the Scorecard has led to limited uptake as compared to what the UNDG originally envisaged.

Resistance to the exercise at UNCT level. In a minority of cases where there was agreement to implement the Scorecard, UNCTs did not participate fully, e.g. Heads of Agencies were not available for interviews and/or there was a delayed response to approval of the Scorecard report and recommendations. This was also partly a result of staff turnover.

Follow-up and management responses. Follow-up to recommendations has not been systematic or mainstreamed, and has largely been a result of UN Women's initiative. This issue is dealt with in more detail in Section 4.2.

4.1.1 Should the UNDG introduce a more participatory process?

The UN accountability landscape on gender equality has changed since the introduction of the UN-SWAP, which promotes accountability of senior managers. The question now is whether a specific accountability mechanism at UNCT level is still needed, and/or whether the Scorecard could become a more participatory exercise. UNCTs and consultants have taken the initiative to adapt the methodology set out in the Scorecard Users' Guide and introduce more participatory practices – see Box 1 for examples.


Box 1: Example of participatory practices introduced to Scorecard implementation

One respondent noted: “The black and white scoring in the Scorecard meant that the UNCT was going to look very bad; there was also a negative feeling about the process because the scoring approach was so dogmatic and there is very little space for feedback. So we decided to focus much more on the interviews and saw real value in opening up space for dialogue – during the interviews many respondents realized for the first time that they needed to take gender equality issues into account.”

In the cases of Cameroon and Paraguay, the UNCT and consultant introduced participatory workshops with the GTG and other staff to open and close the Scorecard process. In the introductory workshop participants were divided into four groups, and each group was asked to self-assess UNCT performance against two Scorecard areas. This increased understanding of the Scorecard's purpose and facilitated dialogue around the UNCT's performance. The consultant was then able to triangulate ratings from this exercise with ratings from interviews and from the CCA/UNDAF and Joint Programme review. The process closed with a participatory workshop for developing the follow-up action plan. This focused on the Scorecard areas that had shown the worst performance, and groups in the workshop were each given two areas for which they needed to produce an action plan. This has likely led to improved buy-in to recommendation follow-up.

In Cambodia a participatory assessment tool was developed through which GTG members led an internal review process within their entities to generate inputs to, and build consensus on, their entity's ratings against the Scorecard, validating the final version with their Country Representatives. Each entity then presented the findings at the GTG annual retreat, where entities engaged in discussion in groups and in plenary to identify similarities across entities vis-à-vis strengths and weaknesses, culminating in the identification of key priorities and recommendations (opportunities and threats, for the short term to the medium/long term) and an action plan for the UNDAF roll-out. For example, UN commitment to national priorities for GEEW was assessed against a set of national priorities, based on CEDAW and a gender analysis of key national issues. This participatory process enabled the further engagement of UNCT members in assessing their entity's performance, and contributed towards securing their inputs and buy-in to recommendations.

In Timor Leste the Scorecard process effectively engaged technical level officials from several government Ministries, including the Ministries of Defense, Justice and Finance. In many Scorecard exercises interviews with government are only with the Women's Machinery, and other ministries have not contributed.

Respondents noted that Scorecard implementation can be an important opportunity to increase understanding of and momentum towards gender equality, particularly if there is participation in the process. Greater participation should also lead to more buy-in to recommendations and follow-up. There is some evidence that this is the case; however, it was difficult to track this systematically as few UNCTs have rigorous follow-up methods.


The disadvantages of a more participatory process include potentially losing the Scorecard's accountability focus, and a potential need for a greater investment of time and resources to implement the Scorecard exercise.

4.2 Follow-up to recommendations

The 2008-2011 Scorecard review found that less than 50 per cent of report recommendations had been followed up. For 2012-2014, five out of 17 UNCTs had a formal management response (Albania, Rwanda, Maldives, Vietnam and Timor Leste).⁶ In the majority of cases, follow-up depended on the GTG developing a work plan rather than being mainstreamed in other UNDAF processes, including UNCT work plans and RC Annual Reports/performance appraisals. Lack of leadership at the highest level and weak inter-agency coordination structures for the UNDAF are also major constraints to follow-up in some cases. This again contrasts with the approach of the UN-SWAP, where it is primarily staff other than gender focal points who are responsible for action. There was also a lack of structured follow-up, e.g. through a yearly report on progress in implementing recommendations discussed at the Heads of Agencies level, although this is happening in a minority of countries. The management response table included in the Maldives report could be adapted to become a standard template for ensuring follow-up to key recommendations by UNCTs.

The template includes a Key Recommendations column (an extract from the Scorecard report) and a Management Response Tracking section with the following columns:

• Response: are the key recommendations acceptable? If not, why not?
• Key Actions: what are the concrete proposed actions, and who are the key partners in carrying them out?
• Time frame.
• Responsibility: who will be responsible for implementing the key actions?
• Status: completed, partially completed or pending.
• Comments: e.g. a brief explanation for any delays.

6 In the case of two of the Scorecards reviewed, it was too early in the process for a management response.

There were examples of good practice, and of tying Scorecard results to the common strategic planning process, noted here:

• In Kenya the Scorecard implementation was funded under the Joint Programme on Gender Equality and the Empowerment of Women, which included results statements related to advancing UN coordination and coherence for gender equality. The Annual Work Plan of the Joint Programme, which was approved each year by the UNCT, then responded to the recommendations in the Scorecard report. It was also reported that the Scorecard had focused systematic attention on gender mainstreaming in the UNDAF development process. Gender specialists were located in each of the UNDAF outcome groups, although as there are 10 outcomes the gender specialists were stretched.

• In Cambodia, in response to the Scorecard findings, a Gender Scorecard Action Plan was developed, which the GTG was responsible for implementing under UNCT leadership. The UNCT reported that it had implemented many of the Scorecard recommendations related to the 2016-2018 UNDAF. Cambodia is also of interest as one of only two UNCTs to undertake the Scorecard exercise twice (the other is Jordan). Respondents reflected that during the second round of Scorecard implementation, the process moved from one involving a consultant working in isolation to a participatory process that strengthened the GTG and coordination mechanisms, with the focus of the Scorecard shifting towards supporting and informing strategic prioritization for joint programming. The introduction of a dedicated UNDAF outcome on gender also aided the second Scorecard process.

• In Albania the Resident Coordinator reviews the Scorecard work-plan developed by the GTG on a half yearly basis to ensure implementation of recommendations.

Table 7: Quality of recommendations, 2008-2011 and 2012-2014

2008-2011: Five out of 20 reports included timelines, responsibilities and budgets with recommendations, as suggested in the Scorecard Users' Guide; a further five reports included details on timelines and responsibilities. Overall there were too many recommendations (some reports included thirty or more), and a number of recommendations were not clearly defined.

2012-2014: Nine out of 19 reports included timelines, responsibilities and budgets with recommendations. The remaining 10 reports included two of the three elements above – timelines and responsibilities were missing in 6 cases, and budgets in 7 cases. The number of recommendations per report decreased, but recommendations could still be more clearly defined.

Table 7 demonstrates that the quality of recommendations has improved. The number of recommendations has decreased, and the three elements needed for follow-up (timelines, responsibilities and budgets) are included more often. However, in a number of cases recommendations are still expressed too vaguely to be actionable. Examples of clear, concise and feasible recommendations from the Cameroon and Rwanda reports are included in Annex 3. As part of updating the Scorecard, further guidance should be provided to UNCTs, GTGs and consultants on the formulation of recommendations and follow-up plans.

4.3 Knowledge sharing

Country-level respondents noted that one main type of support they would like to receive from Regional Offices and HQ is comparative experience: how the Scorecard was conducted in other countries, including examples of good practice, participatory processes, and follow-up to recommendations. Respondents said that they needed examples of good practice in order to get buy-in from their UNCT. There is also now the potential to aggregate Scorecard findings at the regional level, so that regional-level staff can identify the areas where UNCTs need the most support. This should involve all relevant UNDG regional staff, not only UN Women staff.

Until May 2013 the main source of knowledge sharing was the Scorecard Global Help Desk, which was supported by a consultant. The Scorecard process has now gone more than 18 months without this central support system, which brought in comparative experience from Scorecard implementation as a whole, provided details on consultants, gave technical support in Scorecard implementation to UNCTs and consultants, and reviewed draft reports. Closing the Help Desk has significantly decreased coherence in Scorecard implementation as well as knowledge sharing.7 Several respondents who had received the 2008-2011 Scorecard report said that they had found it very useful, particularly for pointing out the comparative performance of their UNCT. As the 2008-2011 report was never formally approved and circulated, the UNDG Gender Equality Task Team lost an opportunity to promote knowledge sharing between regions and UNCTs.

7 Disclosure: the consultant writing this report also developed the Scorecard with the UNDG Gender Equality Task Team and provided support through the Scorecard Help Desk between August 2008 and May 2013.

4.4 The Gender Scorecard and other accountability mechanisms

The review compared the use of four accountability and learning mechanisms intended to promote gender mainstreaming: the Scorecard, the UN-SWAP, the ILO Participatory Gender Audit, and the UNDP Gender Equality Seal. This was done to provide recommendations to the UNDG on coherence and potential duplication, given that UNCTs require a clear understanding of the purposes of each of these mechanisms. Table 8 summarizes the main elements of each mechanism.

Table 8: Coherence and complementarity between four accountability and learning gender mainstreaming mechanisms

Main purpose

• Scorecard: Accountability of UNCTs, with all UNDAF roll-out UNCTs expected to implement the Scorecard.

• UN-SWAP: Accountability of UN agencies at HQ level, with a target of the UN system meeting or exceeding UN-SWAP standards by end 2017.

• ILO Gender Audit: Accountability and lesson learning, initially intended for ILO Offices but also used more widely in the UN and by counterparts.

• UNDP Gender Equality Seal: Accountability and lesson learning for UNDP Country Offices and Regional Bureaux.

Main focus

• Scorecard: UNDAF, joint programming, joint processes, capacity, budgeting, monitoring and evaluation.

• UN-SWAP: Internal agency HQ mainstreaming functions.

• ILO Gender Audit: Establishes a baseline of performance against 12 mainstreaming focus areas, assesses progress made, and recommends ways of addressing challenges.

• UNDP Gender Equality Seal: A corporate certification process that recognizes good performance of UNDP Country Offices, Regional Service Centres and Headquarters in delivering gender equality results.

There are important similarities and differences between these accountability mechanisms. The UN-SWAP and the Scorecard focus mainly on accountability, while the ILO Gender Audit and the UNDP Gender Equality Seal are more participatory in approach. For example, the Scorecard rates UNCTs against objective criteria, while the Gender Audit uses a qualitative approach, striving to understand the perspectives and viewpoints of individuals within the organization. The Gender Audit is more subjective in nature, building on the premise that perceptions need to change in order to bring about improvement. It is structured as a learning tool with a strong participatory approach, encouraging reflection, analysis and collective thinking on gender; the self-assessment aspect is key to the whole exercise and marks a significant difference from the Scorecard.8

The UNDP Gender Equality Seal is the only mechanism to focus on actual development results as well as gender mainstreaming, and also facilitates a process that builds the capacity of country offices. It offers a more participatory model, some elements of which the Scorecard may wish to replicate. The Gender Equality Seal resembles the Scorecard in that it includes a set of minimum standards, but it takes a participatory approach, working with UNDP Country Offices over six to eight months to achieve these standards. Further details are provided in Annex 2.

8 ILO (2011) ILO Participatory Gender Audit: Relevance and use for the United Nations and its agencies, Geneva.

Table 8 (continued)

Reporting process

• Scorecard: Consultant reports to the UNCT, including recommendations; finalized reports are archived by DOCO.

• UN-SWAP: Agencies self-report at the HQ level, with reports cross-checked by UN Women. Results are aggregated by UN Women and feed into the Secretary-General's annual report on gender mainstreaming.

• ILO Gender Audit: Audit report includes recommendations.

• UNDP Gender Equality Seal: An external assessment reports on UNDP Country Office performance and determines the level of qualification: gold, silver or bronze.

Uptake to date

• Scorecard: Completed by at least 39 UNCTs since 2008.

• UN-SWAP: Completed three times (2012-2014) by some 90 per cent of UN agencies.

• ILO Gender Audit: Used extensively at ILO HQ and country offices, and by some other agencies at HQ and country levels.

• UNDP Gender Equality Seal: Piloted in three UNDP Country Offices and being rolled out across the organization in 2015, to 31 Country Offices, the UNDP Pacific Regional Centre and the UNDP Regional Bureaux for Asia-Pacific.

The UNDG and IANWGE need to ensure complementarity between these different accountability mechanisms and promote understanding of their different purposes and uses, including at the country level. As one respondent put it: "as there are several UN scorecard tools out there it might be helpful to identify these in the preamble to the Scorecard Users' Guide to avoid confusion. Two UN entities initially questioned their participation [in the Scorecard process] as they had recently been involved in other Scorecard exercises." Complementarity between the Scorecard and the Gender Audit was already considered at IANWGE's Seventh Session in 2008, where the Gender Audit was found to be a positive complement to the Scorecard. Further communications on these complementary tools need to be disseminated by the UNDG.

4.5 The Scorecard and the UN-SWAP

The review looked in particular at the complementarities between the Scorecard and the UN-SWAP, to help ensure that they are aligned and used optimally. The Scorecard was envisaged as part of a coherent three-point accountability framework for gender equality, as follows:

1. The UN-SWAP, rolled out in 2012 and tied to the CEB system-wide gender equality policy.

2. The Scorecard, rolled out in 2008, with a focus on joint processes and institutional arrangements within the UNCT.

3. Accountability for the UN’s contributions to gender equality development results at country and normative levels. Whereas the UN-SWAP and UNCT Performance Indicators focus mainly on institutional capacity to deliver results, this mechanism is planned to focus on actual development results.9

During their development the UN-SWAP Performance Indicators were aligned with the Scorecard Indicators, using the same language. Although both measure performance on mainstreaming, the two frameworks, as noted in Table 8, have different purposes, processes, focus, and audiences. The main differences are:

• The UN-SWAP focuses on the individual agency level, while the Scorecard focuses on UNCT joint processes.

• The UN-SWAP process involves a self-assessment while the Scorecard involves an external consultant.

• The audience for the UN-SWAP is heads of agencies at HQ level, while the Scorecard audience is the UNCT.

Currently there is overlap in two areas – see Annex 4 for full details:

• Gender marker system. As all UN entities are required under the UN-SWAP to develop a gender marker system, all of the programmes making up the UNDAF should already be covered by such a system, excluding any Joint Programmes. In terms of resource allocation there is some overlap, although the UN-SWAP indicator refers to the total agency budget while the Scorecard indicator mainly refers to a specific budget for gender mainstreaming (see the illustrative sketch after this list).

• Capacity assessment and development. This is the area of greatest overlap, as under the UN-SWAP all entity staff should receive gender related training.
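
Purely as an illustration of the resource-tracking overlap described above, the following is a minimal sketch of how a gender marker system could be used to quantify the share of programme budgets that support gender equality. The marker codes, programme names and figures are invented for the example and are not drawn from the Scorecard or UN-SWAP guidance.

# Hypothetical sketch: estimating the share of UNDAF programme budgets
# that a gender marker system would count as contributing to gender
# equality. Marker codes and figures are illustrative only.
programmes = [
    {"name": "Programme A", "budget_usd": 2_000_000, "gender_marker": 2},  # principal objective
    {"name": "Programme B", "budget_usd": 1_500_000, "gender_marker": 1},  # significant objective
    {"name": "Programme C", "budget_usd": 3_000_000, "gender_marker": 0},  # no contribution
]

# Budgets with marker 1 or 2 are counted as gender-related, a common
# convention in such schemes (an assumption here, not a Scorecard rule).
gender_related = sum(p["budget_usd"] for p in programmes if p["gender_marker"] >= 1)
total = sum(p["budget_usd"] for p in programmes)
print(f"Gender-related share of budget: {gender_related / total:.0%}")  # 54%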

9 UN Women (2012) UN System-wide Action Plan for implementation of the CEB United Nations system-wide policy on gender equality and the empowerment of women, pp 1-2, as approved by the Chief Executives Board for Coordination.


Of the 21 Scorecard indicators, 14 have no equivalent in the UN-SWAP, including those relating to the specificities of UNDAF planning and the UNCT context, which is quite different from the situation at HQ.10 Similarly, a number of UN-SWAP indicators – performance assessment, organizational culture, gender architecture, knowledge sharing, and coherence – are not included in the Scorecard. Given the differences between the two frameworks, a complete merger should be considered carefully, as it may not be the most effective option. However, the Scorecard indicators on budgeting could be aligned with the UN-SWAP, and the indicators on capacity could be removed as they duplicate the corresponding UN-SWAP indicator. Additionally, Scorecard data could be archived in the UN-SWAP web tool, and the processes of UN-SWAP implementation could provide useful learning for the future of the Scorecard.

As noted, the UN-SWAP framework sets out plans for developing an accountability system for gender equality normative and development results. The original plan was to establish an inter-agency working group, chaired by UN Women, to steer the process and ensure coherence with the UN-SWAP and the Scorecard. However, rather than developing an additional accountability tool, it may be more effective and practical to draw on the relevant parts of the SDG indicator framework to be finalized within the context of the post-2015 development agenda.

4.6 Humanitarian settings

Five UNCTs that completed Scorecards between 2012 and 2014 were in countries in humanitarian or post-humanitarian situations: Colombia, Jordan, OPT, Sudan and Timor Leste. The 2008-2011 Scorecard review recommended including an Annex to the Scorecard for such situations. These additional humanitarian indicators were piloted during the implementation of the Scorecard in Sudan; however, the results of the piloting were not closely tracked, and additional piloting should be considered. The humanitarian indicators are included again in this report as Annex 5, for consideration by the UNDG. Pending common guidance on this issue, individual UNCTs are developing their own indicators for humanitarian settings, which raises issues of consistency of rating.

4.7 Scorecard content and UNDG Guidance

Respondents had the following recommendations concerning revisions to the Scorecard report and indicators:

10 1.a - Adequate UNCT review of country context related to gender equality and women's empowerment; 2.a - Gender perspectives are adequately reflected in joint programming; 2.b - Joint programmes; 2.c - UNCT support for national priorities related to gender equality and women's empowerment; 2.d - UNCT support to gender mainstreaming in programme-based approaches; 2.e - UNCT support to gender mainstreaming in aid effectiveness processes; 3.a - Involvement of National Machineries for Women/Gender Equality and women's departments at the sub-national level; 3.b - Involvement of women's NGOs and networks; 3.c - Women from excluded groups included as programme partners and beneficiaries in key UNCT initiatives; 4.a - Multi-stakeholder Gender Theme Group is effective; 4.c - Gender expert roster with national, regional and international expertise used by UNCT members; 5.a - Gender Theme Group coordinator is part of UNCT Heads of Agency group; 5.b - UNCT Heads of Agency meetings regularly take up gender equality programming and support issues; 8.a - CCA/UNDAF quality control.


• Align the Scorecard minimum standards with CCA/UNDAF guidance, e.g. the minimum standard for strategic planning in the Scorecard is that at least one UNDAF outcome and corresponding indicator should be gender-sensitive. This is not included in the CCA/UNDAF guidance, which has meant that some UNCTs have argued that they do not need to meet the Scorecard standard.

• Include a section in the report on the situation on gender equality in-country, to provide context for the Scorecard ratings.

• Ensure that there is consistency in definitions in indicators 1.a and 2.a as there is inadequate correspondence between the areas for rating at different levels.

• Ensure consistency between indicators 1.d and 1.e, as the measure for 1.d is 33-50 per cent, but 100 per cent for 1.e.

• Clarify what is meant by key sex-disaggregated data for indicator 1.e, which currently is almost impossible to meet.

• Clarify that for 1.d both requirements need to be met for a minimum standard rating (i.e. at least one indicator at outcome level, and between one third and one half of indicators at output level, are gender-sensitive); see the illustrative sketch after this list.
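
For illustration only, a minimal sketch of the 1.d minimum standard check described in the last bullet is given below. The function name and structure are hypothetical and are not part of the Scorecard guidance; the sketch simply follows the wording of the bullet, treating the standard as met only when both conditions hold.

def meets_1d_minimum_standard(outcome_indicators_gender_sensitive,
                              output_indicators_gender_sensitive,
                              output_indicators_total):
    """Hypothetical check of the 1.d minimum standard as worded above:
    at least one outcome-level indicator is gender-sensitive, AND
    between one third and one half of output-level indicators are
    gender-sensitive. Both conditions must hold."""
    if output_indicators_total == 0:
        return False
    share = output_indicators_gender_sensitive / output_indicators_total
    return outcome_indicators_gender_sensitive >= 1 and (1 / 3) <= share <= (1 / 2)

# Example: 2 gender-sensitive outcome indicators and 8 of 20 output
# indicators gender-sensitive (40 per cent) would meet both conditions.
print(meets_1d_minimum_standard(2, 8, 20))  # True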

4.8 Roles and responsibilities

Currently the roles of the different actors implementing the Scorecard are not clearly set out in the Scorecard Users' Guide, except in the case of the Scorecard consultant. The UNDG can consider the following clarifications.

At the UNDG level, responsibilities should include managing support to UNCTs, developing a global Scorecard work plan, developing and updating technical tools and guidance, conducting global reviews, and ensuring alignment with other accountability mechanisms. UNDG Regional Chairs should contact RCs/HCs concerning Scorecard requirements.

Regional-level UNDG staff should disseminate Scorecard information and guidance to UNCTs, and provide technical support for implementation, including names of potential consultants. They should disseminate comparative experience as needed, carry out regional-level analysis of Scorecard results, and aggregate Scorecard findings to tailor country-level support in weaker areas. Follow-up to Scorecard report recommendations should be discussed by Regional Peer Support Groups.

At country level, the UNCT should take the lead on the Scorecard exercise, through the GTG or equivalent. Funding should be made available from the RC's budget or joint entity funding. Agency Heads should make themselves available for interviews, and the Scorecard findings and recommendations should be promptly discussed by the UNCT. The UNCT should also develop and track a follow-up plan for the Scorecard recommendations.


5. Recommendations

Institutional arrangements within the UNDG

1) Reconfirm the commitment and shared responsibility of UNDG Gender Equality Task Team members for promoting and supporting greater uptake of the Scorecard by UNCTs.

2) Feature the Scorecard as a standing item in the work plan of the UNDG Gender Equality Task Team, specifying deliverables, division of labor and funding requirements.

3) Promote greater participation of other UNDG entities in Scorecard implementation.

4) Clarify the roles and responsibilities of UN staff at HQ, regional and country levels, drawing on the analysis in Section 4.8.

5) Liaise with other UNDG Working Groups that are developing new accountability/performance assessment tools for UNCTs (e.g. the UNDG Human Rights Mechanism), to ensure consistency across tools and mechanisms, and avoid overburdening of UNCTs with inconsistent requirements.

6) Liaise with relevant UNDG WGs to ensure that UNDG programming guidance to UNCTs draws on the Scorecard tool and this global review. This would include recognizing the implementation of the Scorecard as a key input into UNDAF monitoring and evaluation by UNCTs, and suggesting that the Scorecard is carried out at least once towards the end of every UNDAF cycle.

Updating and upgrading the Scorecard tool

7) Revise and update the Scorecard performance areas and indicators to ensure relevance and applicability for UNCTs, as well as consistency and alignment with the UN-SWAP. This would include reviewing and piloting the humanitarian indicators (see Annex 5).

8) Review the Scorecard methodology to make the process more participatory by drawing from good practices, as well as from methodologies applied to other existing tools (e.g. the UNDP Gender Equality Seal).

9) Further explore complementarities and opportunities for aligning the Scorecard with other existing mechanisms, including the UN-SWAP.

10) Develop a standard template, and related guidance, for Scorecard recommendations and management responses, to ensure timely implementation and review of Scorecard recommendations by UNCTs.

Communication and knowledge management

11) Make the current Scorecard review report available to the UNDG and UNCTs.

12) Upload all Scorecard reports implemented to date to the UNDG web site.

13) Update the Scorecard Users' Guide available to UNCTs, including a short description clarifying the purpose and focus of the different tools and accountability mechanisms currently in place.

14) Prepare a new communication package for RC/HCs reaffirming the expectation that UNCTs complete the Scorecard exercise, as confirmed in the QCPR, and sharing the relevant resources and support available.

15) Ensure learning and experience sharing for relevant UN staff around UNCT performance assessment on gender equality, including through webinars with staff and experts who supported UNCTs with Scorecard implementation.


Annex 1: List of interviewees

Name Entity

Michele Ribotta UN System Coordination Division, UN Women HQ Report manager

Aparna Mehrotra UN System Coordination Division, UN Women HQ

Monica Dyer UN System Coordination Division, UN Women HQ

Fumie Nakamura UN Women Regional Office, ECA

Janneke van der Graaff Kukler UN Women Regional Office, AP

Themba Kalua UN Women Regional Office, ESA

Jayne Adams UN Women Regional Office, LAC

Florence Hamimi UN Women Regional Office, WCA

Alethia Jimemez UN Women Regional Office, AS

Clara Anyangwe UN Women, Rwanda

M. Janviere UN Women, Rwanda

Gitanjali Singh UN Women Nepal

Anastasia Divinskaya UN Women Timor Leste

David Saunders UN Women Albania

Estela Bulku UN Women Albania

Nyambura Ngugi UN Women Kenya

Flora Macula UN Women Kosovo

Silja Rajander UN Women Cambodia

Joana Urbea UN Women Colombia

Maria Paulina Garcia RCO, Colombia

Ailla Edin Ayesh UN Women Palestine

Marta Gabarino UN Women Jordan

Michael Schaadt RCO, Jordan

Maria Machado UN Women Guatemala

Francoise Coupal Consultant

Andrea Esser Consultant


Annex 2: UNDP Gender Equality Seal

The UNDP Gender Equality Seal is a corporate certification process that recognizes good performance of UNDP Country Offices in delivering transformational gender equality results. It tracks, measures and certifies competence and achievements in advancing women's rights and corporate gender equality goals, and it is a tool for empowering managers and accelerating the changes needed to support countries' gender equality goals. The Gender Equality Seal establishes minimum acceptable quality standards through 44 performance indicators and provides a clear framework to guide senior managers in linking gender equality in the workplace with development results. The Seal is also a learning platform that supports critical reflection, learning and innovative thinking on gender mainstreaming. Unlike gender audits or evaluations, the Seal process creates an inclusive and open space for building broad-based consensus around the goals and implications of gender mainstreaming.

The Seal process has four steps. Step One is a self-assessment conducted over five weeks using an online version of the standardized assessment tool, which allows an office or unit to judge for itself where it stands with respect to the benchmarks, supported by the UNDP Gender Equality Seal Team. After the self-assessment is completed, the Seal Team provides feedback and the final results of the online assessment to each applicant. Country Offices then receive recommendations on how to improve their scores and build an action plan for improvement.

Step Two involves development of an Action Plan for improvement, following the recommendations of the Seal Team, during a one-month period. Implementation of the action plan to improve benchmarks then covers a total of six months, with assistance from the UNDP Regional Service Centre or HQ Gender Team.

Step Three is a final assessment to identify the level of certification, carried out over four to five days. It focuses on validating the findings of the online self-assessment, as well as on assessing gendered practices and the overall impact of the Country Office's gender work through interactions with staff, partners, government counterparts, civil society groups and the media. The assessment process concludes with a detailed debriefing.

In Step Four, a Certification Committee approves the suggested level of certification, and news of the certification is circulated among UNDP colleagues worldwide. The Seal Team also offers post-certification support for the development of an implementation plan based on the lessons and strategic recommendations from the exercise. The Seal is valid for a period of two years, after which Country Offices are invited to apply for re-certification.

Source: UNDP Gender Equality Seal. A Guide for Country Offices/Regional Service Centres/Regional Bureaux.


Annex 3: Examples of good quality recommendations

Cameroon

Engage Gender CSOs and Vulnerable Groups

The UNDAF had very limited participation of civil society and vulnerable groups in its formulation. A number of strategies can be used to give voice to women's rights groups and marginalized groups such as poor women, persons with disabilities and youth. First, the Gender Task Force and Senior Gender Platform for Decision-makers recommended further participation of women's rights groups and representation of vulnerable groups to give voice to their priorities and concerns in programme implementation, monitoring and evaluation. Actions to be taken:

• Identify key women’s rights groups and marginalized sectors of the population to be represented.

• Update TORs for the Gender Task Force to include representation from Gender CSOs.

• Include women’s rights groups and the marginalized in the Gender Task Force and Platform for Senior Decision-makers.

Timing: December 2012; Responsibility: UN Women and UNCT; Cost: $0

Rwanda

Monitoring and evaluation

The report found that the One Programme reporting on the results framework needed to be strengthened, and that there was a lack of sex-disaggregated and baseline data. The latter is a common issue across UNCTs, and the report recommended:

• Training of M&E Task Force members in the formulation of gender-sensitive indicators and use of gender impact assessments.

• M&E Task Force to formulate country specific sets of gender sensitive indicators at output level for tracking progress on gender equality policies and changes over time.

• Organizing M&E field visits.

• Continuing UNCT support to strengthening national systems for the systematic production of sex-disaggregated data.

• Undertaking a planned Gender Baseline Survey.

Follow-up was planned as follows:

• Technical assistance: UN Staff College; UNDG; UNEG

• Time frame: 2011-2012

• Responsibility for follow-up: UNCT, M&E and Gender Equality Task Forces

• Resources: US$50,000


Annex 4: Overlap between the UN-SWAP and the Scorecard indicators

UN-SWAP Performance Indicators (meets requirements level) and the corresponding Gender Scorecard Performance Indicators (meets minimum standard level)

UN-SWAP 1: Up to date gender equality and women's empowerment, including gender mainstreaming and the equal representation of women, policies and plans implemented.
Scorecard: No overlap.

UN-SWAP 2: Assessment of gender equality and the empowerment of women integrated into core values and/or competencies for all staff, with a particular focus on levels P4 or equivalent and above.
Scorecard: No overlap.

UN-SWAP 3a: Gender analysis in the central strategic planning document and main country programme documents; and 3b: The central strategic planning document includes at least one specific outcome/expected accomplishment and one specific indicator on gender equality and women's empowerment.
Scorecard: No overlap; the Scorecard focuses on the CCA/UNDAF while the UN-SWAP focuses on the central strategic planning document at global level. The requirements in the Scorecard and UN-SWAP are at the same level.

UN-SWAP 4a: Reporting on gender equality and women's empowerment results in relation to the central strategic planning document; and 4b: All key entity data is sex-disaggregated, or there is a specific reason noted for not disaggregating data by sex.
Scorecard: No overlap.

UN-SWAP 5: Meets the UNEG gender-related norms and standards.
Scorecard: No overlap.

UN-SWAP 6: Consultation takes place with the gender focal point/department on risks related to gender equality and the empowerment of women, as part of the risk-based audit annual planning cycle.
Scorecard: No overlap.

UN-SWAP 7: Programme quality control systems fully integrate gender analysis.
Scorecard: No overlap.

UN-SWAP 8: Financial resource tracking mechanism in use to quantify disbursement of funds that promote gender equality and women's empowerment.
Scorecard 6a: The UNCT has clear plans for implementing a budgeting system to track UNCT expenditures for gender equality programming, with timelines for completion of the plan noted.

UN-SWAP 9: Financial benchmark for resource allocation for gender equality and women's empowerment mandate is met.
Scorecard 6b: Specific budgets to strengthen UNCT support for gender equality and women's empowerment allocated for four of the following:

• Capacity development and training of UNCT members.

• Gender equality pilot projects.

• Support to national women's machinery.

• Support to women's NGOs and networks.

• Maintenance of experts' roster.

• Gender mainstreaming in CCA/UNDAF exercises (e.g. for the preparation of background documentation, gender analysis capacity building, technical resource persons, etc.).

UN-SWAP 10a: Gender focal points or equivalent at HQ, regional and country levels are: (a) appointed from staff level P4 and above for both mainstreaming and representation of women; (b) have written terms of reference; (c) allocate at least 20 per cent of their time to gender focal point functions; and 10b: The entity has reached the equal representation of women for General Service staff and also at P4 and above levels; and 10c: Gender department/unit is fully resourced according to the entity mandate.
Scorecard: No overlap.

UN-SWAP 11: Organizational culture fully supports promotion of gender equality and the empowerment of women.
Scorecard: No overlap.

UN-SWAP 12a: Entity-wide assessment of capacity of staff at HQ, regional and country levels in gender equality and women's empowerment is carried out; and 12b: A capacity development plan is established or updated at least every five years.
Scorecard 4b: Resident Coordinator systematically promotes, monitors and reports on capacity development activities related to gender equality and women's empowerment:

• Regular review of capacity of UNCT to undertake gender mainstreaming (e.g. once every two or three years).

• Training on gender mainstreaming takes place for all UNCT staff (one day every six months for new staff in the first year, and a minimum of one day of training once every two years thereafter).

• Gender specialists and gender focal points receive specific training (a minimum of two days of training a year on gender equality and women's empowerment programming).

UN-SWAP 13: Ongoing mandatory training for all levels of entity staff at HQ, regional and country offices.
Scorecard: See Scorecard 4b above.

UN-SWAP 14a: Knowledge on gender equality and women's empowerment is systematically documented and publicly shared; and 14b: Communication plan includes gender equality and women's empowerment as an integral component of internal and public information dissemination.
Scorecard: No overlap.

UN-SWAP 15: Participates systematically in inter-agency coordination mechanisms on gender equality and the empowerment of women.
Scorecard: No overlap.


Annex 5: Draft Performance indicators for humanitarian, emergency, recovery and post-conflict situations

9. FOR HUMANITARIAN, TRANSITION, EARLY RECOVERY & POST-CONFLICT SITUATIONS

9.a Humanitarian action: analysis and planning

Sources: IASC (2008); OCHA (2011)

Exceeds minimum standards

• The Common Humanitarian Action Plan (CHAP) and/or Consolidated Appeal includes a detailed analysis of the differential impact of disasters and/or conflicts on women, men, boys and girls, and an analysis of their livelihood strategies and how to support these.

• The CHAP and/or CAP strategy, strategic objectives and indicators reflect the needs analysis vis-à-vis gender equality.

• The CAP uses the IASC gender marker.

• All data in the CHAP and/or Consolidated Appeal is sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.

• Cluster response plans and project sheets identify and respond to the direct needs of women, girls, boys and men.

• Joint needs assessments include analysis of the differential impact of disasters and/or conflicts on women, men, boys and girls, and an analysis of their livelihood strategies and how to support these.

• Joint humanitarian programmes include marginalized women in planning processes, implementation and monitoring and evaluation.

• The CHAP and/or Consolidated Appeal include reference to the IASC policy statement on Gender Equality Programming in Humanitarian Action.

• The CHAP and/or Consolidated Appeal includes reference to Security Council resolution 1325 on women, peace and security.

Meets minimum standard

• The CHAP and/or Consolidated Appeal includes an analysis of the differential impact of disasters and/or conflicts on women, men, boys and girls, and an analysis of their livelihood strategies and how to support these.

• The CHAP and/or CAP strategy, strategic objectives and indicators reflect the needs analysis vis-à-vis gender equality.

• The CAP uses the IASC Gender Marker.

• All data in the CHAP and/or Consolidated Appeal is sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.

• Cluster response plans and project sheets identify and respond to the direct needs of women, girls, boys and men.

• Joint needs assessments include analysis of the differential impact of disasters and/or conflicts on women, men, boys and girls, and an analysis of their livelihood strategies and how to support these.

• Joint humanitarian programmes include marginalized women in planning processes and implementation.

Needs improvement: Five or six of the above (under Meets minimum standard) are met.

Inadequate: Less than five of the above (under Meets minimum standard) are met.

Missing: Not applicable
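
For illustration only, the rating bands above can be read as a simple scoring rule. The sketch below is hypothetical (the function name and structure are not part of the draft indicator) and assumes that the Meets minimum standard rating requires all seven listed items; the Exceeds minimum standards level, which is assessed against the longer list, is not modelled.

def rate_draft_indicator_9a(items_met, total_items=7):
    """Hypothetical rating logic for draft indicator 9.a, following the
    bands above: all listed items met -> meets minimum standard;
    five or six -> needs improvement; fewer than five -> inadequate."""
    if items_met >= total_items:
        return "Meets minimum standard"
    if items_met in (5, 6):
        return "Needs improvement"
    return "Inadequate"

print(rate_draft_indicator_9a(6))  # Needs improvement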

9.b Humanitarian action: implementation, monitoring and evaluation

Sources: IASC (2008); OCHA (2011)

Exceeds minimum standards

• There is good intra- and inter-cluster sector coordination and information management on gender equality issues.

• Consultation with local women’s machinery, organizations and women is promoted in the definition of priorities for joint humanitarian assistance and the design, delivery and monitoring of joint assistance programmes.

• Sufficient funding is allocated for activities to enhance capacity for integrating gender equality into policies and programmes.

• Sufficient funding is targeted to programmes to address gender inequalities such as activities to empower women and girls.

• Monitoring and evaluation of joint humanitarian programmes includes gender analysis.

• All monitoring data is sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.

Meets minimum standards

• There is adequate intra- and inter-cluster sector coordination and information management on gender equality issues.

• Sufficient funding is allocated for activities to enhance capacity for integrating gender equality into policies and programmes.

• Sufficient funding is targeted to programmes to address gender inequalities such as activities to empower women and girls.

• Monitoring and evaluation of joint humanitarian programmes includes gender analysis.

• All monitoring data is sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.



Needs improvement: Three or four of the above (under Meets minimum standards) are met.

Inadequate: Less than three of the above (under Meets minimum standards) are met.

Missing: Not applicable

9.c Transition and post-conflict: analysis and planning

Sources: UNDG (2007); UNDG and World Bank (2007)

Exceeds minimum standards

• Analysis of structural and proximate causes of conflict in the UN Transitional Strategy includes a detailed analysis of gender equality and causes which have impacted women, men, boys and girls differently.

• The UN Transitional Strategy disaggregates target groups by sex with a rationale for the relative attention to women, men, boys and girls.

• Post Conflict Needs Assessments and Transitional Results Frameworks delineate the differing needs and livelihood strategies of women, men, boys and girls, and support the design of gender-proactive programming.

• All data in the Post Conflict Needs Assessments and Transitional Results Frameworks is sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.

• More than 50 per cent of the results statements in the Transitional Results Frameworks are gender-sensitive, and linked to specific budgets.

Meets minimum standards

• Analysis of structural and proximate causes of conflict in the UN Transitional Strategy includes an analysis of gender equality and causes which have impacted women, men, boys and girls differently.

• The UN Transitional Strategy disaggregates target groups by sex with a rationale for the relative attention to women, men, boys and girls.

• Post Conflict Needs Assessments and Transitional Results Frameworks delineate the differing needs of women, men, boys and girls, and support the design of gender-proactive programming.

• All data in the Post Conflict Needs Assessments and Transitional Results Frameworks is sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.



• Between 30 and 50 per cent of results statements in the Transitional Results Frameworks are gender-sensitive, and linked to budgets.

Needs improvement: Three or four of the above (under Meets minimum standards) are met. Where there is no UN Transitional Strategy or equivalent, at least two of the other standards should be met. Where there is no Post Conflict Needs Assessment, at least two of the other standards should be met.

Inadequate: Less than three of the above (under Meets minimum standards) are met. Where there is no UN Transitional Strategy or equivalent, one of the above is met. Where there is no Post Conflict Needs Assessment, one of the above is met.

Missing: Not applicable

9.d Early recovery

Sources: CWGER (2008)

Exceeds minimum standards

• Support to policy development includes adequate attention to gender equality.

• Capacity development for national authorities and UN agencies for early recovery includes capacity development and targets for improved capacity in gender mainstreaming.

• Early recovery programming builds the capacity of women and women’s organizations to ensure their active and equal participation in all aspects and sectors of early recovery and longer-term recovery and development.

• Joint early recovery needs assessments, strategies and plans include an analysis of the differential impact of disasters and/or conflicts on women, men, boys and girls, and an analysis of their livelihood strategies and how to support these.

• Joint early recovery programmes include marginalized women in planning processes, implementation, and monitoring and evaluation.

• Joint early recovery programmes include adequate attention to gender equality.

• Baseline and other early recovery data should be sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.

Meets minimum standards

• Capacity development for national authorities and UN agencies for early recovery includes capacity development in gender mainstreaming.



• Early recovery programming builds the capacity of women and women’s organizations to ensure their active and equal participation in all aspects and sectors of early recovery and longer-term recovery and development.

• Joint early recovery needs assessments, strategies and plans include an analysis of the differential impact of disasters and/or conflicts on women, men, boys and girls, and an analysis of their livelihood strategies and how to support these.

• Joint early recovery programmes include marginalized women in planning processes and implementation.

• Joint early recovery programmes include adequate attention to gender equality.

• Baseline and other early recovery data should be sex-disaggregated, or there is a specific reason noted for not disaggregating by sex.

Needs improvement: Three to five of the above (under Meets minimum standards) are met.

Inadequate: Less than three of the above (under Meets minimum standards) are met.

Missing: Not applicable

Sources:

CWGER (Cluster Working Group on Early Recovery) (2008) Guidance Note on Early Recovery. Geneva: UNDP.

OCHA (2011) Consolidated Appeal 2012 Guidelines. Geneva: OCHA.

IASC (2008) Gender Equality Programming in Humanitarian Action. IASC Policy Statement. June.

UNDG (2007) UN Transitional Strategy Guidance Note. New York: UNDG.

UNDG and World Bank (2007) Joint Guidance Note on Integrated Recovery Planning using Post Conflict Needs Assessments and Transitional Results Frameworks, draft.


10. FOR INTEGRATED MISSIONS11

10. Integrated Mission strategic planning, coordination and monitoring

Sources: DPKO (2010)

Exceeds minimum standards

Integrated Strategic Framework or equivalent

• Situation analysis includes the main gender equality issues in relation to conflict (e.g. SGBV, HIV/AIDS, land holding, displacement, access to natural resources).

• All data in the Integrated Strategic Framework is sex-disaggregated, or there is a specific reason given for not disaggregating by sex.

• Gender equality is a central area of discussion in the Integrated Strategic Framework retreat.

• Strategic objectives integrate the main gender equality goals of the UN.

• Results statements integrate the main gender equality objectives of the UN.

Coordination

• There is a separate thematic group on gender equality, and other thematic groups pay adequate attention to gender.

• Women's machinery consulted on the ISF.

Monitoring

• A dedicated report is produced on gender equality.

• Monitoring processes include attention to the main gender equality strategic objectives and results statements.

Meets minimum standard

Integrated Strategic Framework or equivalent

• Situation analysis includes the main gender equality issues in relation to conflict (e.g. SGBV, HIV/AIDS, land holding, displacement, access to natural resources).

• All data in the Integrated Strategic Framework is sex-disaggregated, or there is a specific reason given for not disaggregating by sex.

• Strategic objectives integrate the main gender equality goals of the UN.

• Results statements integrate the main gender equality objectives of the UN.

Coordination

• Either there is a separate thematic group on gender equality, or other thematic groups pay adequate attention to gender.

• Women's machinery consulted on the ISF.

11 Integrated missions refer to peacekeeping or special political missions that take place alongside a UNCT presence.



Monitoring

• Monitoring processes include attention to the main gender equality strategic objectives and results statements.

Needs improvement: Four to six of the items above (under Meets minimum standard) are met.

Inadequate: Less than three of the items above (under Meets minimum standard) are met.

Missing: Not applicable

Sources: IMPP Guidelines: Role of the Field. Integrated Planning for UN Field Presences. DPKO, January 2010.