Impact Assessment

Institutions and Productive Employment Programs in Latin America and the Caribbean: Methodological Approach and Preliminary Results, IDB, 2011

    In the post-Washington Consensus era, there has been renewed discussion of the role of the State in encouraging productivity. This has prompted a debate about the institutions and programs that underpin state interventions to enhance productivity. These interventions often take the form of business development services, support for innovation, export promotion, and other programs that target specific sectors and activities. Collectively, they are known as Productive Development Programs (PDPs). Today, governments in Latin America and the Caribbean devote substantial resources to PDPs, supporting firms through official agencies and private intermediaries. Despite their economic justification, there is still no well-defined methodology for assessing their performance or for categorizing PDPs by type, size, target market, and delivery mechanism.

    This document by Martin Chrisney and Marco Kamiya aims to contribute to the development of a tool for analyzing PDPs. The methodology for mapping and measuring institutional performance (MIDI, from its acronym in Spanish) is an effort to quantify the organizational factors that contribute to a better balance between the costs and benefits of PDPs. The objective of the MIDI is to measure the quality of the programs and the institutions behind them and to assess their ability to achieve their stated goals. To that end, the MIDI establishes metrics to analyze how these organizational arrangements are planned, implemented, and monitored.

    The report is organized as follows. Section 1 introduces what PDPs offer and describes the theoretical foundations for their different dimensions, along with the implications for establishing metrics. Section 2 describes the MIDI and alternative analytical methods. Section 3 presents the criteria and subcriteria used for evaluating institutions and programs. Section 4 outlines the aggregate results of the pilot phase. Section 5 summarizes the lessons learned and the new tasks emerging from this work, and offers some conclusions.