
Page 1:

1-1

University of Hail
College of Computer Science and Engineering

Course: ICS313 Fundamentals of Programming Languages.

Instructor: Abdul Wahid Wali
Email: [email protected]

Page 2:

Chapter 4: Lexical and Syntax Analysis

Contents:
• Introduction
• Lexical Analysis
• The Parsing Problem
• Recursive-Descent Parsing
• Bottom-Up Parsing

1-2

Page 3:

Introduction

• Language implementation systems must analyze source code, regardless of the specific implementation approach

• Nearly all syntax analysis is based on a formal description of the syntax of the source language (BNF)

1-3

Page 4:

Advantages of Using BNF to Describe Syntax

• Provides a clear and concise syntax description

• The parser can be based directly on the BNF

• Parsers based on BNF are easy to maintain

1-4

Page 5:

Syntax Analysis

The syntax analysis portion of a language processor nearly always consists of two parts:

• A low-level part called a lexical analyzer (mathematically, a finite automaton based on a regular grammar)

• A high-level part called a syntax analyzer, or parser (mathematically, a push-down automaton based on a context-free grammar, or BNF)

1-5

Page 6:

Reasons to Separate Lexical and Syntax Analysis

• Simplicity - less complex approaches can be used for lexical analysis; separating them simplifies the parser

• Efficiency - separation allows optimization of the lexical analyzer

• Portability - parts of the lexical analyzer may not be portable, but the parser is always portable

1-6

Page 7:

Lexical Analysis

• Lexical analysis and the compiling process

[Diagram: the phases of compilation. The program passes through lexical analysis (producing a token stream), syntactic analysis (producing a parse tree), semantic analysis, optimization, and code generation, which emits machine code.]

7

Page 8:

8

Lexical analysis is the process of converting a sequence of characters into a sequence of tokens.

A program or function which performs lexical analysis is called a lexical analyzer, lexer, or scanner.

A lexer often exists as a single function which is called by a parser or another function.

Page 9:

Lexical Analysis (Detailed description)

• A lexical analyzer is a pattern matcher for character strings

• A lexical analyzer is a “front-end” for the parser

• Identifies substrings of the source program that belong together - lexemes
  – Lexemes match a character pattern, which is associated with a lexical category called a token

Example:  Sum = oldsum - value / 100;

  Lexeme    Token
  Sum       IDENT (identifier)
  =         ASSIGN_OP
  oldsum    IDENT
  -         SUBTRACT_OP
  value     IDENT
  /         DIVISION_OP
  100       INT_LIT
  ;         SEMICOLON

• Sum is a lexeme; its token may be IDENT

1-9
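As a side note, token categories like these are usually represented inside a compiler as small integer codes. A minimal sketch of such codes for the example above (the names follow the slide, but the numeric values are assumptions made only for illustration):

/* Hypothetical token codes for the example statement above; the slides
   name the tokens but do not fix their numeric values. */
enum token_code {
    IDENT       = 11,   /* identifiers such as Sum, oldsum, value */
    INT_LIT     = 10,   /* integer literals such as 100           */
    ASSIGN_OP   = 20,   /* =                                      */
    SUBTRACT_OP = 21,   /* -                                      */
    DIVISION_OP = 22,   /* /                                      */
    SEMICOLON   = 23    /* ;                                      */
};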

Page 10:

Lexical Analysis (continued)

• The lexical analyzer is usually a function that is called by the parser when it needs the next token

• Three approaches to build a lexical analyzer:

  1- Write a formal description of the tokens and use a software tool that constructs table-driven lexical analyzers given such a description.

1-10

Page 11:

  2- Design a state diagram that describes the tokens and write a program that implements the state diagram

  3- Design a state diagram that describes the tokens and hand-construct a table-driven implementation of the state diagram

• We only discuss approach 2

State Diagram Design
  – A naive state diagram would have a transition from every state on every character in the source language - such a diagram would be very large!

1-11

Page 12:

Lexical Analysis (contd.)

• In many cases, transitions can be combined to simplify the state diagram
  – When recognizing an identifier, all uppercase and lowercase letters are equivalent
    • Use a character class that includes all letters
  – When recognizing an integer literal, all digits are equivalent - use a digit class

1-12

Page 13:

Lexical Analysis (contd.)

• Reserved words and identifiers can be recognized together (rather than having a part of the diagram for each reserved word)
  – Use a table lookup to determine whether a possible identifier is in fact a reserved word

1-13

Page 14:

Lexical Analysis (contd.)

Convenient utility subprograms (a sketch of all three follows below):

• getChar - gets the next character of input, puts it in nextChar, determines its class and puts the class in charClass

• addChar - puts the character from nextChar into the place where the lexeme is being accumulated, lexeme

• lookup - determines whether the string in lexeme is a reserved word (returns a code)

1-14
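None of these utilities is spelled out on the slides. The following is a minimal, self-contained sketch of how they might look in C, assuming the global names used on the slides (nextChar, charClass, lexeme), character classes LETTER, DIGIT, and UNKNOWN, an end-of-input marker, and a tiny reserved-word table; all of those details are illustrative assumptions, not the course's definitive implementation.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Character classes assumed by the state-diagram lexer (values arbitrary). */
#define LETTER        0
#define DIGIT         1
#define UNKNOWN      99
#define END_OF_INPUT (-1)

/* Assumed token codes; IDENT is returned for non-reserved identifiers. */
#define IDENT      11
#define IF_CODE    30
#define WHILE_CODE 31

int  charClass;        /* class of the current character  */
char nextChar;         /* the current character           */
char lexeme[100];      /* the lexeme being accumulated    */
int  lexLen;           /* current length of lexeme        */

/* getChar - get the next character of input, put it in nextChar,
   determine its class, and put the class in charClass. */
void getChar(void) {
    int c = getchar();
    if (c == EOF) { charClass = END_OF_INPUT; return; }
    nextChar = (char) c;
    if (isalpha(c))      charClass = LETTER;
    else if (isdigit(c)) charClass = DIGIT;
    else                 charClass = UNKNOWN;
}

/* addChar - append nextChar to lexeme, keeping it NUL-terminated.
   (A real lexer would reset lexLen at the start of each call to lex().) */
void addChar(void) {
    if (lexLen <= 98) {
        lexeme[lexLen++] = nextChar;
        lexeme[lexLen]   = '\0';
    }
}

/* lookup - return a reserved-word code if the string is a reserved word,
   otherwise the generic identifier code. */
int lookup(const char *s) {
    if (strcmp(s, "if") == 0)    return IF_CODE;
    if (strcmp(s, "while") == 0) return WHILE_CODE;
    return IDENT;
}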

Page 15:

State Diagram

1-15

A state diagram to recognize names, reserved words, and integer literals

Page 16:

Lexical Analysis (continued)

Implementation (assume initialization):

int lex() {
  switch (charClass) {

    case LETTER:   /* Parse identifiers and reserved words */
      addChar();
      getChar();
      while (charClass == LETTER || charClass == DIGIT) {
        addChar();
        getChar();
      }
      return lookup(lexeme);
      break;

1-16

Page 17:

Lexical Analysis (continued)

    case DIGIT:    /* Parse integer literals */
      addChar();
      getChar();
      while (charClass == DIGIT) {
        addChar();
        getChar();
      }
      return INT_LIT;
      break;

  }  /* End of switch */
}    /* End of function lex */

1-17
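As a usage illustration only (not part of the slides), a small test driver might call lex() in a loop the same way a parser would. This sketch assumes lex() has been completed with cases for white space, operators, and end of input, and that charClass is set to an assumed END_OF_INPUT value when the source is exhausted:

#include <stdio.h>

/* Declarations for the lexer sketched on the preceding slides. */
extern void getChar(void);
extern int  lex(void);
extern char lexeme[];
extern int  charClass;

#define END_OF_INPUT (-1)          /* assumed end-of-input character class */

int main(void) {
    getChar();                     /* prime nextChar and charClass */
    while (charClass != END_OF_INPUT) {
        int nextToken = lex();     /* a parser would call lex() the same way */
        printf("Next token is %d, lexeme is %s\n", nextToken, lexeme);
    }
    return 0;
}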

Page 18:

The Parsing Problem

• Parsing, or, more formally, syntactic analysis, is the process of analyzing a text, made of a sequence of tokens, to determine its grammatical structure with respect to a given formal grammar.

• Goals of the parser, given an input program:
  – Find all syntax errors; for each error, produce an appropriate diagnostic message and recover quickly
  – Produce the parse tree, or at least a trace of the parse tree, for the program

1-18

Page 19:

The Parsing Problem (continued)

• The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways:

• Two categories of parsers:
  1- Top down - produce the parse tree, beginning at the root
     • Order is that of a leftmost derivation
     • Traces or builds the parse tree in preorder
  2- Bottom up - produce the parse tree, beginning at the leaves
     • Order is that of the reverse of a rightmost derivation

• Useful parsers look only one token ahead in the input

1-19

Page 20:

The Parsing Problem (contd.)

Top-down Parsers

• Top-down parsing can be viewed as an attempt to find leftmost derivations of an input stream by searching for parse trees using a top-down expansion of the given formal grammar rules.

• In other words, given a sentential form xAα, the parser must choose the correct A-rule to get the next sentential form in the leftmost derivation, using only the first token produced by A

• The most common top-down parsing algorithms:
  – Recursive descent - a coded implementation
  – LL parsers (Left-to-right, Leftmost derivation) - a table-driven implementation

1-20

Page 21:

The Parsing Problem (contd.)

Bottom-up Parsers

• A bottom-up parser can start with the input and attempt to rewrite it to the start symbol. Intuitively, the parser attempts to locate the most basic elements, then the elements containing these, and so on. Another term used for this type of parser is shift-reduce parsing.

• In other words, given a right sentential form α, determine what substring of α is the right-hand side of the rule in the grammar that must be reduced to produce the previous sentential form in the rightmost derivation

• The most common bottom-up parsing algorithms are in the LR (Left-to-right, Rightmost derivation) family

1-21

Page 22:

The Parsing Problem (contd.)

The Complexity of Parsing

• Parsers that work for any unambiguous grammar are complex and inefficient ( O(n^3), where n is the length of the input )

• Compilers use parsers that only work for a subset of all unambiguous grammars, but do it in linear time ( O(n), where n is the length of the input )

1-22

Page 23:

Recursive-Descent Parsing

• There is a subprogram for each non-terminal in the grammar, which can parse sentences that can be generated by that non-terminal

• EBNF is ideally suited for being the basis for a recursive-descent parser, because EBNF minimizes the number of non-terminals

A grammar for simple expressions:

  <expr>   → <term> {(+ | -) <term>}
  <term>   → <factor> {(* | /) <factor>}
  <factor> → id | ( <expr> )

1-23

Page 24:

Recursive-Descent Parsing (contd.)

• Assume we have a lexical analyzer named lex, which puts the next token code in nextToken

• The coding process when there is only one RHS:
  – For each terminal symbol in the RHS, compare it with the next input token; if they match, continue, else there is an error
  – For each non-terminal symbol in the RHS, call its associated parsing subprogram

1-24

Page 25:

Recursive-Descent Parsing (contd.)

/* Function expr
   Parses strings in the language generated by the rule:
   <expr> → <term> {(+ | -) <term>} */

void expr() {

  /* Parse the first term */
  term();                    /* non-terminal is a function call */

  /* As long as the next token is + or -, call lex to get the next
     token, and parse the next term */
  while (nextToken == PLUS_CODE || nextToken == MINUS_CODE) {
    lex();
    term();
  }
}

1-25

Page 26:

Recursive-Descent Parsing (cont.)

• This particular routine does not detect errors

• Convention: every parsing routine leaves the next token in nextToken

• A non-terminal that has more than one RHS requires an initial process to determine which RHS it is to parse
  – The correct RHS is chosen on the basis of the next token of input (the lookahead)
  – The next token is compared with the first token that can be generated by each RHS until a match is found
  – If no match is found, it is a syntax error

1-26

Page 27:

Recursive-Descent Parsing (contd.)

/* Function factor
   Parses strings in the language generated by the rule:
   <factor> -> id | ( <expr> ) */

void factor() {

  /* Determine which RHS */
  if (nextToken == ID_CODE)
    lex();                   /* For the RHS id, just call lex */

  /* If the RHS is ( <expr> ) - call lex to pass over the left
     parenthesis, call expr, and check for the right parenthesis */
  else if (nextToken == LEFT_PAREN_CODE) {
    lex();
    expr();                  /* non-terminal is a function call */
    if (nextToken == RIGHT_PAREN_CODE)
      lex();
    else
      error();
  }  /* End of else if (nextToken == ... */

  else
    error();                 /* Neither RHS matches */
}

1-27
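The slides show expr() and factor() but not the routine for <term>. A sketch that follows the same pattern would look like the following; MULT_CODE and DIV_CODE are assumed names for the * and / token codes, since the slides do not name them:

/* Function term
   Parses strings in the language generated by the rule:
   <term> → <factor> {(* | /) <factor>} */

void term() {

  /* Parse the first factor */
  factor();

  /* As long as the next token is * or /, call lex to get the next
     token, and parse the next factor */
  while (nextToken == MULT_CODE || nextToken == DIV_CODE) {
    lex();
    factor();
  }
}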

Page 28:

Top-Down Parsing (contd.)

The LL Parser

• An LL parser is a top-down parser that parses the input from Left to right and constructs a Leftmost derivation of the sentence. The class of grammars which are parsable in this way is known as the LL grammars.

• An LL parser is called an LL(k) parser if it uses k tokens of lookahead when parsing a sentence. If such a parser exists for a certain grammar and it can parse sentences of this grammar without backtracking, then the grammar is called an LL(k) grammar.

• LL(1) grammars are very popular because the corresponding LL parsers only need to look at the next token to make their parsing decisions

1-28

Page 29:

The LL Parser (contd.)

• LL(1) Parser
• The parser works on strings from a particular context-free grammar.
• The parser consists of:
  – an input buffer, holding the input string (built from the grammar)
  – a stack on which to store the terminals and non-terminals from the grammar yet to be parsed
  – a parsing table which tells it what (if any) grammar rule to apply given the symbols on top of its stack and the next input token

• The parser applies the rule found in the table by matching the top-most symbol on the stack (row) with the current symbol in the input stream (column).

• When the parser starts, the stack already contains two symbols:

  [ S, $ ]

  where '$' is a special terminal to indicate the bottom of the stack and the end of the input stream, and 'S' is the start symbol of the grammar. The parser will attempt to rewrite the contents of this stack to what it sees on the input stream. However, it only keeps on the stack what still needs to be rewritten.

1-29

Page 30:

The LL Parser (contd.)

Example:
• To explain its workings we will consider the following small grammar:

  1. S → F
  2. S → ( S + F )
  3. F → a

  and parse the following input:  ( a + a )

• The parsing table for this grammar looks as follows:

       (   a   +   )   $
  S    2   1   -   -   -
  F    -   3   -   -   -

1-30

Page 31:

The LL Parser Example (contd.)

• Parsing procedure
• In each step, the parser reads the next-available symbol from the input stream and the top-most symbol from the stack. If the input symbol and the stack-top symbol match, the parser discards them both, leaving only the unmatched symbols in the input stream and on the stack.

• Thus, in its first step, the parser reads the input symbol '(' and the stack-top symbol 'S'. The parsing table instruction comes from the column headed by the input symbol '(' and the row headed by the stack-top symbol 'S'; this cell contains '2', which instructs the parser to apply rule (2). The parser has to rewrite 'S' to '( S + F )' on the stack and write the rule number 2 to the output. The stack then becomes:

  [ (, S, +, F, ), $ ]

1-31

Page 32:

The LL Parser Example (contd.)

• Since the '(' from the input stream did not match the top-most symbol, 'S', from the stack, it was not removed, and remains the next-available input symbol for the following step.

• In the second step, the parser removes the '(' from its input stream and from its stack, since they match. The stack now becomes:

  [ S, +, F, ), $ ]

• Now the parser has an 'a' on its input stream and an 'S' as its stack top. The parsing table instructs it to apply rule (1) from the grammar and write the rule number 1 to the output stream. The stack becomes:

  [ F, +, F, ), $ ]

• The parser now has an 'a' on its input stream and an 'F' as its stack top. The parsing table instructs it to apply rule (3) from the grammar and write the rule number 3 to the output stream. The stack becomes:

  [ a, +, F, ), $ ]

1-32

Page 33:

The LL Parser Example (contd.)

• In the next two steps the parser reads the 'a' and '+' from the input stream and, since they match the next two items on the stack, also removes them from the stack. This results in:

  [ F, ), $ ]

• In the next three steps the parser will replace 'F' on the stack by 'a', write the rule number 3 to the output stream, and remove the 'a' and ')' from both the stack and the input stream. The parser thus ends with '$' on both its stack and its input stream.

• In this case the parser will report that it has accepted the input string and write the following list of rule numbers to the output stream:

  [ 2, 1, 3, 3 ]

• This is indeed a list of rules for a leftmost derivation of the input string, which is:

  S → ( S + F ) → ( F + F ) → ( a + F ) → ( a + a )

1-33

Page 34:

The LL Parser (contd.)

• Remarks
• As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal, or the special symbol $ (a code sketch follows after this slide):
  – If the top is a nonterminal, then it looks up in the parsing table, on the basis of this nonterminal and the symbol on the input stream, which rule of the grammar it should use to replace it with on the stack. The number of the rule is written to the output stream. If the parsing table indicates that there is no such rule, then it reports an error and stops.
  – If the top is a terminal, then it compares it to the symbol on the input stream and if they are equal they are both removed. If they are not equal, the parser reports an error and stops.
  – If the top is $ and on the input stream there is also a $, then the parser reports that it has successfully parsed the input; otherwise it reports an error. In both cases the parser will stop.

• These steps are repeated until the parser stops, and then it will have either completely parsed the input and written a leftmost derivation to the output stream or it will have reported an error.

34
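These three kinds of steps translate almost directly into code. The following is a minimal sketch of a table-driven LL(1) parser for the example grammar above; the character encoding of the symbols, the fixed-size stack, and the main() test harness are assumptions made for illustration, not part of the slides:

#include <stdio.h>
#include <string.h>

/* Terminals are the characters '(', 'a', '+', ')', '$';
   nonterminals are 'S' (the start symbol) and 'F'. */

/* Map a terminal to a parsing-table column, or -1 if it is not a terminal. */
static int col(char t) {
    switch (t) {
    case '(': return 0;
    case 'a': return 1;
    case '+': return 2;
    case ')': return 3;
    case '$': return 4;
    default:  return -1;
    }
}

/* Parsing table for the example grammar (0 means "no rule": error):
          (   a   +   )   $
     S    2   1   -   -   -
     F    -   3   -   -   -                                          */
static const int table[2][5] = {
    { 2, 1, 0, 0, 0 },    /* row S */
    { 0, 3, 0, 0, 0 }     /* row F */
};

/* Right-hand sides of rules 1..3, written left to right. */
static const char *rhs[] = { "", "F", "(S+F)", "a" };

int main(void) {
    const char *input = "(a+a)$";     /* '$' marks the end of the input   */
    char stack[64] = { '$', 'S' };    /* bottom marker, then start symbol */
    int top = 1;                      /* index of the stack-top symbol    */
    int ip = 0;                       /* index of the next input symbol   */

    for (;;) {
        char X = stack[top];          /* stack-top symbol  */
        char a = input[ip];           /* next input symbol */

        if (X == '$' && a == '$') {               /* both exhausted: accept  */
            printf("accept\n");
            return 0;
        } else if (X == 'S' || X == 'F') {        /* nonterminal: use table  */
            int c = col(a);
            int rule = (c >= 0) ? table[X == 'S' ? 0 : 1][c] : 0;
            if (rule == 0) { printf("error\n"); return 1; }
            printf("%d ", rule);                  /* output the rule number  */
            top--;                                /* pop the nonterminal ... */
            for (int i = (int) strlen(rhs[rule]) - 1; i >= 0; i--)
                stack[++top] = rhs[rule][i];      /* ... push RHS, rightmost symbol first */
        } else if (X == a) {                      /* matching terminals      */
            top--;
            ip++;
        } else {                                  /* terminal mismatch       */
            printf("error\n");
            return 1;
        }
    }
}

Run on the input ( a + a ), this sketch prints the rule numbers 2 1 3 3 and accepts, matching the derivation traced on the previous slides.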

Page 35:

Bottom-up Parsing

• A bottom-up parser starts with the input and attempts to rewrite it to the start symbol. The name comes from the concept of a parse tree, in which the most fundamental units are at the bottom, and structures composed of them are in successively higher layers, until at the top of the tree a single unit, or start symbol, comprises all of the information being analyzed.

• In other words, bottom-up parsing is a parsing method that works by identifying terminal symbols first, and combines them successively to produce nonterminals.

• The most common bottom-up parsers are the shift-reduce parsers.

1-35

Page 36:

Shift-Reduce Parsers

• These parsers examine the input tokens and either shift (push) them onto a stack or reduce elements at the top of the stack, replacing a right-hand side by a left-hand side.

• Action table
  Often an action (or parse) table is constructed which helps the parser determine what to do next. The following is a description of what can be held in an action table.

• Actions
  – Shift - push token onto stack
  – Reduce - remove handle from stack and push on corresponding nonterminal
  – Accept - recognize sentence when stack contains only the distinguished symbol and input is empty
  – Error - happens when none of the above is possible; means original input was not a sentence

1-36

Page 37:

Algorithm: Shift-Reduce Parsing

• 1. Start with the sentence to be parsed as the initial sentential form
• 2. Until the sentential form is the start symbol, do:
  2.1. Scan through the input until we recognise something that corresponds to the RHS of one of the production rules (this is called a handle)
  2.2. Apply a production rule in reverse; i.e., replace the RHS of the rule which appears in the sentential form with the LHS of the rule (an action known as a reduction)

• A shift-reduce parser is most commonly implemented using a stack, where we proceed as follows:
  – start with an empty stack
  – a "shift" action corresponds to pushing the current input symbol onto the stack
  – a "reduce" action occurs when we have a handle on top of the stack. To perform the reduction, we pop the handle off the stack and replace it with the nonterminal on the LHS of the corresponding rule.

1-37

Page 38:

Bottom-up Parsing Example

Let us take the following grammar:

  (1) Sentence   → NounPhrase VerbPhrase
  (2) NounPhrase → Article Noun
  (3) VerbPhrase → Verb | Adverb Verb
  (4) Article    → the | a | ...
  (5) Verb       → jumps | sings | ...
  (6) Noun       → dog | cat | ...

And the input sequence:  the dog jumps

Let us apply the shift-reduce parsing algorithm to this input sequence and see how shift-reduce parsers work.

1-38

Page 39:

Bottom-up Parsing Example (contd.)

'the dog jumps' (input sequence)

Then the bottom-up parse using the shift-reduce algorithm is:

  Stack                      Input sequence
  ()                         (the dog jumps)
  (the)                      (dog jumps)       SHIFT word onto stack
  (Art)                      (dog jumps)       REDUCE using grammar rule (4)
  (Art dog)                  (jumps)           SHIFT
  (Art Noun)                 (jumps)           REDUCE using grammar rule (6)
  (NounPhrase)               (jumps)           REDUCE using grammar rule (2)
  (NounPhrase jumps)         ()                SHIFT
  (NounPhrase Verb)          ()                REDUCE using grammar rule (5)
  (NounPhrase VerbPhrase)    ()                REDUCE using grammar rule (3)
  (Sentence)                 ()                SUCCESS using grammar rule (1)

1-39
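The same trace can be reproduced by a small program. Below is a minimal sketch of a shift-reduce parser for this toy grammar, using the naive strategy "reduce whenever the top of the stack matches some right-hand side, otherwise shift"; that strategy happens to work for this grammar and input, but a real shift-reduce parser (e.g., an LR parser) consults an action table instead. All names and the main() harness are illustrative assumptions:

#include <stdio.h>
#include <string.h>

/* Grammar rules as (LHS, RHS) pairs; alternatives such as
   VerbPhrase → Verb | Adverb Verb are listed as separate rules. */
static const struct { const char *lhs; const char *rhs[2]; int len; } rules[] = {
    { "Sentence",   { "NounPhrase", "VerbPhrase" }, 2 },   /* rule 1 */
    { "NounPhrase", { "Article", "Noun" },          2 },   /* rule 2 */
    { "VerbPhrase", { "Verb" },                     1 },   /* rule 3 */
    { "VerbPhrase", { "Adverb", "Verb" },           2 },   /* rule 3 */
    { "Article",    { "the" },                      1 },   /* rule 4 */
    { "Article",    { "a" },                        1 },   /* rule 4 */
    { "Verb",       { "jumps" },                    1 },   /* rule 5 */
    { "Verb",       { "sings" },                    1 },   /* rule 5 */
    { "Noun",       { "dog" },                      1 },   /* rule 6 */
    { "Noun",       { "cat" },                      1 },   /* rule 6 */
};
enum { NRULES = sizeof rules / sizeof rules[0] };

static const char *stack[32];
static int top = 0;                    /* number of symbols on the stack */

/* If the symbols on top of the stack form a handle (the RHS of some rule),
   replace them by the rule's LHS and return 1; otherwise return 0. */
static int reduce(void) {
    for (int r = 0; r < NRULES; r++) {
        int n = rules[r].len;
        if (top < n) continue;
        int match = 1;
        for (int i = 0; i < n; i++)
            if (strcmp(stack[top - n + i], rules[r].rhs[i]) != 0) { match = 0; break; }
        if (match) {
            top -= n;                        /* pop the handle ...     */
            stack[top++] = rules[r].lhs;     /* ... and push the LHS   */
            printf("REDUCE to %s\n", rules[r].lhs);
            return 1;
        }
    }
    return 0;
}

int main(void) {
    const char *input[] = { "the", "dog", "jumps" };
    const int n = 3;
    int ip = 0;

    for (;;) {
        if (reduce()) continue;        /* reduce whenever a handle is on top */
        if (ip < n) {                  /* otherwise shift the next word      */
            printf("SHIFT %s\n", input[ip]);
            stack[top++] = input[ip++];
        } else {
            break;                     /* neither shift nor reduce possible  */
        }
    }
    if (top == 1 && strcmp(stack[0], "Sentence") == 0)
        printf("SUCCESS: the input is a Sentence\n");
    else
        printf("ERROR: the input is not a Sentence\n");
    return 0;
}

Running it on 'the dog jumps' prints the same sequence of SHIFT and REDUCE steps as the trace above and reports SUCCESS.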

Page 40:

Summary

• Syntax analysis is a common part of language implementation

• A lexical analyzer is a pattern matcher that isolates the small-scale parts of a program

• The parser detects syntax errors and produces a parse tree

• A recursive-descent parser is an LL parser

• The parsing problem for bottom-up parsers is to find the substring of the current sentential form that must be reduced to produce the previous sentential form

• Shift-reduce parsing is the most common bottom-up parsing approach

1-40