Econometrics for Consumption and Demand
Alessio Moneta
Doctoral Training Course "Evolutionary Economics"
Max Planck Institute of Economics
Jena
6 March 2008
Introduction
• Individual Demand Analysis
  • Demand system
  • Based on Utility Theory
• Cross-section Demand Analysis
  • Family budgets
  • Budget allocations
Nonparametric Methods for Estimating Engel Curves

Outline
• What is an Engel Curve?
• What does Nonparametric Regression mean?
• NP density estimation. Income Distribution.
• NP smoothing
Engel curves (cross-sectional)
ECs describe how expenditure on commodity g depends on income.
• exp_g = f(y, z)
• exp_g: expenditure on g (e.g. food). Problem of aggregation across g's.
• y: income, in practice proxied by total consumption ("total expenditure").
• z: vector of characteristics, e.g. age or household composition.
• Prices? Implications of the fixed-prices assumption.
• Budget-share EC: w_g = f(y, z), where w_g = (exp_g / y) ∗ 100
• Quantity EC: q_g = f(y, z)
• Price EC: p_g = f(y, z) (cf. Bils and Klenow, AER 2001).
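The budget-share definition above is just arithmetic; a minimal sketch following the slide's percent convention (the household figures are made up for illustration):

```python
def budget_share(exp_g, y):
    """Budget share w_g = (exp_g / y) * 100: percent of total outlay y spent on g."""
    return 100.0 * exp_g / y

# Illustrative household: total expenditure 500, of which 90 on food.
w_food = budget_share(90.0, 500.0)
print(w_food)  # 18.0
```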
Engel curves (individual)
• q_{g,i} = f(y_i, z_i)
• Link with Demand Analysis.
• The problem of aggregation (see Stoker 1993):
  • Theory: focus on individual maximizing behavior.
  • Available data: aggregate.
• Individual ECs are not observable.
Historical perspective
• Engel (1857): no functional form pre-specified. Engel's law: food budget shares decrease with income.
• Allen and Bowley (1935): q_g = a + b y (fitted by OLS).
• Working (1943): w_g = a + b log(y)
• Extended form: w_g = a + b log(y) + c y^{−1}
• Problems motivating more flexible specifications:
  • large errors
  • heterogeneity of tastes
  • non-linearities
• → nonparametric approaches to EC estimation (Härdle and Jerison 1991; Banks et al., RES 1997)
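Working's specification w_g = a + b log(y) is linear in log(y), so it can be fitted by OLS; a minimal sketch on synthetic data generated from known coefficients (all numbers illustrative, chosen so that b < 0 in line with Engel's law):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic incomes and food budget shares following Working's law:
# w = a + b*log(y) + noise, with a = 80, b = -9.
y = rng.uniform(100, 1500, size=500)
w = 80.0 - 9.0 * np.log(y) + rng.normal(0.0, 1.0, size=500)

# OLS on the design matrix [1, log(y)].
X = np.column_stack([np.ones_like(y), np.log(y)])
a_hat, b_hat = np.linalg.lstsq(X, w, rcond=None)[0]
print(round(a_hat, 1), round(b_hat, 1))  # close to the true (80, -9)
```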
Regression Function
• Estimation of the functional dependence of Y (e.g. expenditure on food) on X (e.g. income).
• Y = m(X) + ε, where E(ε|x) = 0
• E(Y|X = x) = E(m(x)|x) + E(ε|x) = m(x)
• Linear regression model: m(x) = α + βx
• Log-linear: m(x) = α + β log x
• OLS: BLUE estimator
• Nonparametric approach:

  E(Y|X = x) = ∫_{−∞}^{∞} y f_{Y|X}(y|x) dy
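The conditional-expectation view suggests estimating m(x) directly by averaging the Y-values of observations whose X is near x, with no linearity assumption; a rough local-averaging sketch (the window width h and the sine regression function are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 2000)
y = np.sin(x) + rng.normal(0.0, 0.3, 2000)   # nonlinear m(x) = sin(x)

def local_mean(x0, x, y, h=0.5):
    """Estimate E(Y | X = x0) by the mean of Y over observations with |X - x0| <= h."""
    mask = np.abs(x - x0) <= h
    return y[mask].mean()

# A linear model would miss the curvature; the local average tracks it.
print(round(local_mean(np.pi / 2, x, y), 2))  # close to sin(pi/2) = 1
```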
Example (UK FES data 2001)
[Figure: scatter plot, Expenditure on Total Food vs. Total Expenditure — "Engel Curve 2001 for Total Food"]
[Figure: scatter plot, Expenditure on Cereals vs. Total Expenditure — "Engel Curve 2001 for Cereals"]
[Figure: scatter plot, Expenditure on Sugar etc. vs. Total Expenditure — "Engel Curve 2001 for Sugar etc."]
[Figure: scatter plot, Expenditure on TV, video and audio equip. vs. Total Expenditure — "Engel Curve 2001 for TV, video and audio equip."]
Some definitions
• Conditional mean:

  E(Y|X = x) = ∫_{−∞}^{∞} y f_{Y|X}(y|x) dy = ∫_{−∞}^{∞} y f_{X,Y}(x,y) / f_X(x) dy

• We would like to estimate f_X(x) and f_{X,Y}(x,y).
• Recall that:
  • Distribution function: F_{X,Y}(x,y) = Pr(X ≤ x, Y ≤ y)
  • The joint pdf f_{X,Y}(x,y) is defined by:

    F_{X,Y}(x,y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f_{X,Y}(z,w) dw dz

  • Marginal pdf: f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x,y) dy
  • Conditional pdf: f_{Y|X}(y|x) = f_{X,Y}(x,y) / f_X(x)
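The identity E(Y|X = x) = ∫ y f_{X,Y}(x,y)/f_X(x) dy can be checked numerically. For a standard bivariate normal with correlation ρ, theory gives E(Y|X = x) = ρx; a sketch using simple grid integration (grid range and step are arbitrary choices):

```python
import numpy as np

rho, x0 = 0.6, 1.0                     # correlation and conditioning point
ygrid = np.linspace(-8, 8, 4001)
dy = ygrid[1] - ygrid[0]

# Joint density of a standard bivariate normal evaluated at (x0, y).
det = 1.0 - rho**2
f_xy = np.exp(-(x0**2 - 2 * rho * x0 * ygrid + ygrid**2) / (2 * det)) \
       / (2 * np.pi * np.sqrt(det))

# Marginal f_X(x0) by integrating out y, then the conditional mean.
f_x = f_xy.sum() * dy
cond_mean = (ygrid * f_xy).sum() * dy / f_x
print(round(cond_mean, 3))  # theory: rho * x0 = 0.6
```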
How to estimate f_X(x) and f_{X,Y}(x,y)?
• Parametric approach: ML estimator
• Simple nonparametric approach: the histogram.

  f̂(x) = (1/(Nb)) ∑_{i=1}^{N} I(X_i ∈ bin(x)),

  where b is the width of each of the bins into which the X-axis is divided.
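The histogram estimator above takes a few lines to implement; a minimal sketch with fixed-width bins anchored at the sample minimum (the bandwidth b = 0.2 and the normal test sample are arbitrary illustrative choices):

```python
import numpy as np

def hist_density(x0, X, b):
    """Histogram estimate f(x0) = (1/(N*b)) * #{X_i in bin(x0)}."""
    origin = X.min()
    k = np.floor((x0 - origin) / b)              # index of the bin containing x0
    in_bin = np.floor((X - origin) / b) == k     # indicator I(X_i in bin(x0))
    return in_bin.sum() / (len(X) * b)

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, 10000)
print(round(hist_density(0.0, X, b=0.2), 2))  # near phi(0) ~ 0.40
```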
Histogram and Smooth Estimations
[Figure: density of Total Consumption (UK 2001) — histogram with kernel and ML normal fits overlaid]
A slightly more complicated histogram:

  f̂(x) = (1/(Nb)) ∑_{i=1}^{N} I(x − b/2 ≤ X_i ≤ x + b/2),

  f̂(x) = (1/(Nb)) ∑_{i=1}^{N} I(−1/2 ≤ (X_i − x)/b ≤ 1/2),

  f̂(x) = (1/(Nb)) ∑_{i=1}^{N} K((X_i − x)/b), where

  K(u) = 1 if |u| ≤ 1/2, 0 otherwise.

• b: bandwidth parameter; K: kernel (weighting) function
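The rewriting above turns the histogram into the "naive" estimator, with a boxcar kernel centered at each evaluation point rather than at fixed bin edges; a direct transcription (bandwidth and test sample again arbitrary):

```python
import numpy as np

def naive_kde(x0, X, b):
    """f(x0) = (1/(N*b)) * sum_i K((X_i - x0)/b), K = indicator of [-1/2, 1/2]."""
    u = (X - x0) / b
    K = (np.abs(u) <= 0.5).astype(float)   # boxcar kernel
    return K.sum() / (len(X) * b)

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, 10000)
print(round(naive_kde(0.0, X, b=0.2), 2))  # near phi(0) ~ 0.40
```

Unlike the histogram, the window here always sits symmetrically around the point x0 where the density is evaluated.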
Histogram and naive Kernel
[Figure: density of Total Consumption (UK 2001) — histogram and naive kernel estimates, bandwidths b = 100 and b = 20]
Kernel Density Estimator:

  f̂(x) = (1/(Nb)) ∑_{i=1}^{N} K((X_i − x)/b)

• K satisfies:
  • K(u) = K(−u)
  • ∫ K(u) du = 1
  • K(u) ≥ 0 everywhere
• The bandwidth satisfies: b −→ 0 and Nb −→ ∞ as N −→ ∞
• Triangular kernel: (1 − |u|) I(|u| ≤ 1)
• Epanechnikov kernel: (3/4)(1 − u²) I(|u| ≤ 1)
• Gaussian kernel: (2π)^{−1/2} exp(−u²/2)
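The three kernels listed above can all be plugged into the same estimator; a sketch evaluating each at one point (bandwidth b = 0.3 and the normal test sample are arbitrary illustrative choices):

```python
import numpy as np

def kde(x0, X, b, K):
    """Kernel density estimate f(x0) = (1/(N*b)) * sum_i K((X_i - x0)/b)."""
    u = (X - x0) / b
    return K(u).sum() / (len(X) * b)

triangular   = lambda u: np.maximum(1.0 - np.abs(u), 0.0)
epanechnikov = lambda u: 0.75 * np.maximum(1.0 - u**2, 0.0)
gaussian     = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(4)
X = rng.normal(0.0, 1.0, 10000)
vals = [kde(0.0, X, 0.3, K) for K in (triangular, epanechnikov, gaussian)]
print([round(v, 2) for v in vals])  # all near phi(0) ~ 0.40
```

With a large sample and the same bandwidth, the choice of kernel matters far less than the choice of b.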
[Figure: Comparison of kernel density estimators, Total Consumption (UK 2001): Epanechnikov b = 100, Epanechnikov b = 20, Triangular b = 100, Gaussian b = 100.]
Bias and Variance of the Kernel Density Estimator:

• Mean Square Error: MSE(θ̂) = E(θ̂ − θ₀)²
• MSE(θ̂) = [Bias(θ̂)]² + Var(θ̂)

$$\mathrm{Bias}(\hat f(x)) \approx \frac{b^2}{2} f^{(2)}(x) \int K(u)\,u^2\,du$$

$$\mathrm{Var}(\hat f(x)) \approx \frac{f(x)}{Nb} \int K(u)^2\,du$$

• Trade-off between Bias and Variance:
  • ↑ b: ↑ Bias, ↓ Var
  • ↓ b: ↓ Bias, ↑ Var
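The trade-off is easy to see in a small Monte Carlo experiment (a sketch; the sample size, bandwidths, and replication counts are arbitrary choices of mine). With X ~ N(0,1), the true density at 0 is known, so squared bias and variance of f̂(0) can be estimated by simulation:

```python
import numpy as np

def gaussian_kde_at(x, data, b):
    u = (data - x) / b
    return np.mean(np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)) / b

def mc_bias_var(b, n=200, reps=300, seed=1):
    """Monte Carlo squared bias and variance of f_hat(0) for N(0,1) data."""
    rng = np.random.default_rng(seed)
    true_f0 = 1 / np.sqrt(2 * np.pi)          # standard normal density at 0
    ests = [gaussian_kde_at(0.0, rng.standard_normal(n), b) for _ in range(reps)]
    return (np.mean(ests) - true_f0) ** 2, np.var(ests)

for b in (0.05, 0.3, 2.0):
    bias2, var = mc_bias_var(b)
    print(f"b={b}: squared bias {bias2:.5f}, variance {var:.5f}")
```

A small b drives the variance term up; a large b drives the bias term up, matching the approximations above.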
Consistency and Asymptotic Normality:

• MS Consistency: lim_{N→∞} E(f̂(x) − f(x))² = 0.
• Asymptotic distribution:

$$\sqrt{Nb}\,\bigl(\hat f(x) - f(x)\bigr) \to N\!\left(0,\; f(x)\int K(u)^2\,du\right)$$

• For d-dimensional densities the convergence rate is √(Nb^d) (curse of dimensionality).
Choice of bandwidth:

• Choose the bandwidth that minimizes MISE = ∫ MSE(f̂(x)) dx.
• It turns out that dMISE/db = 0 for:

$$b^\ast = \left(\frac{\int K(u)^2\,du}{\int f^{(2)}(x)^2\,dx\,\left[\int K(u)\,u^2\,du\right]^2}\right)^{1/5} N^{-1/5}$$

• Note that f⁽²⁾(x)² is unknown!
• Possible solutions:
  • Use an arbitrary b to estimate f⁽²⁾(x)², then plug its value into b*.
  • Silverman's rule of thumb: estimate f⁽²⁾(x)² assuming a normal density function, which gives
    b_s = 1.06 σ̂ N^{−1/5}.
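Silverman's rule is one line of code. A sketch using the sample standard deviation as σ̂ (refinements that replace σ̂ with min(σ̂, IQR/1.34) exist, but are not on the slide):

```python
import numpy as np

def silverman_bandwidth(data):
    """Rule-of-thumb bandwidth b_s = 1.06 * sigma_hat * N^(-1/5)."""
    return 1.06 * np.std(data, ddof=1) * len(data) ** (-1 / 5)

rng = np.random.default_rng(2)
print(silverman_bandwidth(rng.standard_normal(1000)))
```

For N = 1000 draws from a standard normal this gives a bandwidth of roughly 1.06 · 1000^(−1/5) ≈ 0.27; rescaling the data by a constant rescales the bandwidth by the same constant.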
Cross-validation method:

• Recall:

$$\hat f_b(x) = \frac{1}{Nb}\sum_{i=1}^{N} K\!\left(\frac{X_i - x}{b}\right)$$

• Consider the likelihood of the sample:

$$L(b) = \prod_{j=1}^{N} \hat f_b(X_j) = \prod_{j=1}^{N} \frac{1}{Nb}\left[K(0) + \sum_{i \ne j} K\!\left(\frac{X_i - X_j}{b}\right)\right],$$

which is unfortunately maximized for b = 0.

• Cross-validation bandwidth (leave one out):

$$\hat b_{CV} = \arg\max_b \prod_{j=1}^{N} \frac{1}{(N-1)b}\sum_{i \ne j} K\!\left(\frac{X_i - X_j}{b}\right)$$
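The leave-one-out criterion can be maximized over a grid of candidate bandwidths (maximizing the log of the product for numerical stability). A sketch with a Gaussian kernel; the grid and data below are illustrative choices of mine:

```python
import numpy as np

def loo_log_likelihood(b, data):
    """Log of the leave-one-out likelihood: prod_j f_hat_{-j}(X_j)."""
    n = len(data)
    U = (data[:, None] - data[None, :]) / b          # (X_i - X_j)/b, all pairs
    Kmat = np.exp(-0.5 * U**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    np.fill_diagonal(Kmat, 0.0)                      # drop the i = j term
    f_loo = Kmat.sum(axis=0) / ((n - 1) * b)         # f_hat_{-j}(X_j)
    return np.sum(np.log(f_loo))

def cv_bandwidth(data, grid):
    """Grid-search maximizer of the leave-one-out log-likelihood."""
    return max(grid, key=lambda b: loo_log_likelihood(b, data))

rng = np.random.default_rng(3)
data = rng.standard_normal(200)
print(cv_bandwidth(data, np.linspace(0.05, 1.5, 30)))
```

Dropping the i = j term removes the K(0) spike that made the naive likelihood degenerate at b = 0, so the criterion now has an interior maximum.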
Multivariate Kernel Density Estimation:

• Product kernel estimator:

$$\hat f_{X_1,\dots,X_d}(x_1,\dots,x_d) = \frac{1}{Nb^d}\sum_{i=1}^{N} K_d\!\left(\frac{X_{1i} - x_1}{b},\,\dots,\,\frac{X_{di} - x_d}{b}\right)$$

where K_d(u₁, …, u_d) = K(u₁) ⋯ K(u_d).
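A direct translation of the product-kernel formula (a sketch using a Gaussian K and a single bandwidth b for all coordinates, as on the slide; the bivariate data are simulated):

```python
import numpy as np

def product_kernel_density(x, data, b):
    """f_hat(x) = (1/(N*b^d)) sum_i prod_k K((X_ki - x_k)/b), Gaussian K."""
    d = data.shape[1]
    U = (data - x) / b                              # (N, d) standardized diffs
    K = np.exp(-0.5 * U**2) / np.sqrt(2 * np.pi)    # kernel in each coordinate
    return np.mean(np.prod(K, axis=1)) / b**d

rng = np.random.default_rng(4)
data = rng.standard_normal((2000, 2))               # bivariate standard normal
print(product_kernel_density(np.zeros(2), data, b=0.4))
```

Note the b^d in the denominator: this is where the curse of dimensionality bites, since the effective number of observations per cell shrinks as d grows.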
Nadaraya-Watson Estimator

• Recall:

$$E(Y \mid X = x) = m(x) = \frac{1}{f_X(x)} \int_{-\infty}^{\infty} y\, f_{X,Y}(x,y)\,dy \qquad (1)$$

• After substituting kernel estimators for f(x) and f(x, y) in (1):

$$\hat m_{NW}(x) = \frac{\sum_{i=1}^{N} Y_i\, K\!\left(\frac{X_i - x}{b}\right)}{\sum_{i=1}^{N} K\!\left(\frac{X_i - x}{b}\right)}$$

• Note that m̂_NW(x) is a weighted average: ∑_{i=1}^{N} Y_i w_i.
• Special case: rectangular K (moving average).
• Analogies:
  • histogram and regressogram;
  • moving histogram and moving average;
  • kernel density and kernel regression.
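A minimal Nadaraya-Watson sketch with a Gaussian kernel; the regression function and data below are synthetic, standing in for an Engel-curve regression of expenditure on income:

```python
import numpy as np

def nadaraya_watson(x, X, Y, b):
    """m_hat(x) = sum_i Y_i K((X_i - x)/b) / sum_i K((X_i - x)/b)."""
    w = np.exp(-0.5 * ((X - x) / b) ** 2)   # Gaussian kernel weights
    return np.sum(w * Y) / np.sum(w)        # weighted average of the Y_i

# synthetic regression: m(x) = sin(x), observed with noise
rng = np.random.default_rng(5)
X = rng.uniform(0, 3, 2000)
Y = np.sin(X) + 0.1 * rng.standard_normal(2000)
print(nadaraya_watson(1.5, X, Y, b=0.1))    # should be close to sin(1.5)
```

Because the weights sum to one after the division, the estimator is exactly the weighted average ∑ Y_i w_i noted on the slide, with weights concentrated on observations whose X_i lie near x.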
Other Kernel Smoothers:

• Suppose 0 < x₁ < x₂ < … < x_n < 1 (fixed, ordered design).
• Priestley-Chao estimator:

$$\hat m_{PC}(x) = \frac{1}{b}\sum_{i=1}^{N} (X_i - X_{i-1})\, Y_i\, K\!\left(\frac{X_i - x}{b}\right)$$

• Gasser-Müller estimator:

$$\hat m_{GM}(x) = \frac{1}{b}\sum_{i=1}^{N} Y_i \int_{s_{i-1}}^{s_i} K\!\left(\frac{x - u}{b}\right) du,$$

where s₀ = 0, s_i = (X_i + X_{i+1})/2, s_n = 1.
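Under the fixed-design assumption (ordered X_i in (0, 1), with X₀ = 0), the Priestley-Chao estimator is a short sketch; the Gaussian kernel and the quadratic test function are my illustrative choices:

```python
import numpy as np

def priestley_chao(x, X, Y, b):
    """m_hat(x) = (1/b) sum_i (X_i - X_{i-1}) Y_i K((X_i - x)/b); X sorted."""
    dX = np.diff(X, prepend=0.0)                         # X_i - X_{i-1}, X_0 = 0
    K = np.exp(-0.5 * ((X - x) / b) ** 2) / np.sqrt(2 * np.pi)
    return np.sum(dX * Y * K) / b

X = np.linspace(0.002, 0.998, 500)       # fixed design in (0, 1)
Y = X**2                                  # noiseless m(x) = x^2 for illustration
print(priestley_chao(0.5, X, Y, b=0.05))  # close to 0.25
```

The gaps X_i − X_{i−1} turn the sum into a Riemann approximation of the convolution ∫ m(u) K((u − x)/b) du / b, which is why the estimator targets the regression function at x.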
Summary and Conclusions:

• Advantages of the nonparametric approach for estimating:
  • Income distribution
  • Engel curves
• Nonparametric density estimation involves two choices:
  • Kernel function;
  • Bandwidth.
• Nonparametric regression involves three choices:
  • Kernel function;
  • Bandwidth;
  • Smoothing estimator (weighting system).
Links

• CRAN web page: http://cran.r-project.org
• Tinn-R: http://www.sciviews.org/Tinn-R
• R code used here: https://mail.sssup.it/~amoneta/codelecture.txt
References:

• Engel, J. and A. Kneip (1996), "Recent Approaches to Estimating Engel Curves," Journal of Economics, 63(2). http://www.springerlink.com/content/v671xj3p6402mg74
• Lewbel, A. (2006), "Engel Curves," New Palgrave Dictionary of Economics, 2nd Edition. http://www2.bc.edu/~lewbel/palengel.pdf
• Wand, M.P. and M.C. Jones (1995), Kernel Smoothing, Chapman & Hall.
• Hart, J.D. (1997), Nonparametric Smoothing and Lack-of-Fit Tests, Springer Verlag.