B.2A) Explain the concept of a symbol table in compiler design. Discuss its importance and the information it stores.

• Symbol Table in Compiler Design - A symbol table is a crucial data structure used in compiler design to store information about identifiers (such as variables, functions, objects, classes, etc.) used in the source code of a program.
• Concept of Symbol Table - The symbol table acts like a dictionary or lookup table where each identifier is associated with various attributes relevant to its usage in the program. It is constructed and maintained throughout different phases of compilation, especially during lexical analysis, syntax analysis, and semantic analysis.
• Importance of Symbol Table - 1) Efficient Identifier Management: helps the compiler keep track of all declared identifiers and ensures that each identifier is used correctly. 2) Error Detection: supports detection of errors like undeclared variables or duplicate declarations. 3) Type Checking: provides type information to ensure operations are semantically correct. 4) Scope Handling: maintains scope rules (global, local, block-level), which is essential for nested functions or blocks. 5) Optimization: useful in later stages like intermediate code generation and optimization.
• Information Stored - 1) Name of the identifier (variable/function). 2) Type (int, float, char, etc.). 3) Scope (local/global). 4) Memory location or address. 5) Function parameters (if applicable). 6) Line number or position (optional, for error handling).
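To make this concrete, here is a minimal sketch (not from the original notes) of a symbol table as a fixed-size array in C; the field names, sizes, and the linear search are illustrative assumptions, not how any particular compiler implements it:

#include <stdio.h>
#include <string.h>

/* One symbol table record; the fields mirror the "Information Stored" list */
struct symbol {
    char name[32];    /* identifier name                  */
    char type[16];    /* e.g. "int", "float"              */
    int  scope;       /* 0 = global, higher = more nested */
    int  address;     /* assigned memory location/offset  */
};

struct symbol table[100];
int count = 0;

/* insert returns -1 on a duplicate declaration in the same scope */
int insert_symbol(const char *name, const char *type, int scope, int address) {
    for (int i = 0; i < count; i++)
        if (table[i].scope == scope && strcmp(table[i].name, name) == 0)
            return -1;                        /* duplicate declaration error */
    strcpy(table[count].name, name);
    strcpy(table[count].type, type);
    table[count].scope = scope;
    table[count].address = address;
    count++;
    return 0;
}

/* lookup scans newest-first, so inner declarations shadow outer ones */
struct symbol *lookup(const char *name) {
    for (int i = count - 1; i >= 0; i--)
        if (strcmp(table[i].name, name) == 0)
            return &table[i];
    return NULL;                              /* undeclared identifier */
}

int main(void) {
    insert_symbol("x", "int", 0, 0);
    insert_symbol("y", "float", 1, 4);
    struct symbol *s = lookup("y");
    if (s) printf("%s: type=%s scope=%d addr=%d\n", s->name, s->type, s->scope, s->address);
    if (!lookup("z")) printf("z: undeclared identifier\n");
    return 0;
}

The duplicate check in insert_symbol is what makes duplicate-declaration errors detectable, and the newest-first lookup is one simple way to honor nested scope rules.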
B.2D) Explain the role of finite automata in lexical analyzer and differentiate between NFA and DFA.

• Role of Finite Automata in Lexical Analyzer - A lexical analyzer (or lexer) is like a "scanner" that reads the source code and breaks it down into small pieces called tokens (keywords, identifiers, numbers, etc.). To identify these tokens, the lexical analyzer uses Finite Automata (FA). You can think of an FA as a simple machine that checks whether the input matches a pattern. There are two main types of FA used for this task: the DFA (Deterministic Finite Automaton), which always knows what to do next based on the current state and input symbol, and the NFA (Nondeterministic Finite Automaton), which can have multiple choices for what to do next, or can sometimes move without consuming an input symbol at all.
• Difference Between NFA and DFA:
NFA - i) Can have many choices for the same input. ii) Can move to a new state without reading any input (epsilon transitions). iii) May be slower to run, because it might explore multiple paths at once. iv) Harder to program directly because of the multiple paths. v) Often needs fewer states than the equivalent DFA for the same task.
DFA - i) Has exactly one choice for each input. ii) Has no epsilon transitions; it must read an input symbol to move to the next state. iii) Faster, because it follows only one path. iv) Easier to program because there is always exactly one path to follow. v) May need more states than the equivalent NFA (converting an NFA to a DFA can greatly increase the state count).
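As an illustration (not part of the original notes), here is a hand-written DFA in C that recognizes binary literals of the form 0[bB][01]+, the same pattern the LEX program in C.2C uses; the state names and transition function are assumptions chosen for this example:

#include <stdio.h>

/* States of the DFA; DIGITS is the only accepting state */
enum { START, SAW_ZERO, SAW_B, DIGITS, DEAD };

/* The deterministic transition function: exactly one next state
   for every (state, input symbol) pair */
int step(int state, char c) {
    switch (state) {
    case START:    return c == '0' ? SAW_ZERO : DEAD;
    case SAW_ZERO: return (c == 'b' || c == 'B') ? SAW_B : DEAD;
    case SAW_B:                /* fall through: both expect binary digits */
    case DIGITS:   return (c == '0' || c == '1') ? DIGITS : DEAD;
    default:       return DEAD;
    }
}

int accepts(const char *s) {
    int state = START;
    for (; *s; s++)
        state = step(state, *s);   /* one move per input symbol, no choices */
    return state == DIGITS;
}

int main(void) {
    const char *tests[] = { "0b1010", "0B1", "0b", "1010", "0b102" };
    for (int i = 0; i < 5; i++)
        printf("%-8s -> %s\n", tests[i], accepts(tests[i]) ? "accept" : "reject");
    return 0;
}

Because every (state, input) pair has exactly one successor, recognition is a single left-to-right pass over the input, which is why tools like lex convert their patterns into DFAs before scanning.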
C.2B) List the compiler construction tools?
*Lexical Analyzer Generators, *Parser Generators, *Syntax-Directed Translation Tools, *Intermediate Code Generators, *Code Optimizers, *Code Generators, *Error Detection and Reporting Tools, *Debuggers and Profilers, *Symbol Table Management Tools, *Linkers and Loaders, *Compiler Frameworks, *Integrated Development Environments (IDEs). (For example, lex/flex are well-known lexical analyzer generators and yacc/bison are well-known parser generators.)
----------------------------------------------------------------------------------
C.2C) Write a LEX program to identify the following tokens: "IF", integer, "else", binary numbers. Also explain how this program should be compiled.

%{
#include <stdio.h>
%}
%%
"IF"         { printf("Token: IF\n"); }
"else"       { printf("Token: else\n"); }
0[bB][01]+   { printf("Token: Binary Number (%s)\n", yytext); }
[0-9]+       { printf("Token: Integer (%s)\n", yytext); }
%%
int main() {
    yylex();
    return 0;
}

How to compile and run (open a terminal):
Step 1: lex tokens.l (this creates a file named lex.yy.c)
Step 2: gcc lex.yy.c -o lexer -ll (this creates an executable named lexer; -ll links the lex library, which supplies the yywrap function)
Step 3: ./lexer (this runs your program)
Type your input like IF, else, 123, 0b1010, etc. Because lex always prefers the longest match, an input like 0b1010 is recognized as one binary number rather than as the integer 0 followed by other characters.
To end input:
• Press Ctrl + D (Linux/Mac)
• Or Ctrl + Z then Enter (Windows)
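For example, with the input line IF 0b1010 123 else, the program should print Token: IF, Token: Binary Number (0b1010), Token: Integer (123), and Token: else, one per line; whitespace and any other unmatched characters are simply echoed by lex's default rule.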
----------------------------------------------------------------------------------
C.3B) What is bottom-up parsing? Describe the main techniques used in bottom-up parsing, including shift-reduce.

Bottom-up parsing is a parsing technique that builds the parse tree from the input (leaves) up to the start symbol (root). It works by reducing a string of input tokens step-by-step into the start symbol, using the grammar rules in reverse.
1) Shift-Reduce Parsing - The most common bottom-up method. It uses a stack and an input buffer, with two main actions: Shift (push the next input symbol onto the stack) and Reduce (replace a handle on top of the stack with the left-hand side of a grammar rule). Parsing continues until the input is fully reduced to the start symbol (see the sketch after this answer).
2) Operator Precedence Parsing - Used when grammar rules involve operators (like +, *, etc.). It uses the precedence and associativity of the operators to decide when to shift or reduce.
3) LR Parsing (Left-to-right scan, Rightmost derivation in reverse) - The most powerful bottom-up technique. Types: i) Simple LR (SLR) ii) Look-Ahead LR (LALR) iii) Canonical LR. These parsers use parsing tables to decide the actions (shift/reduce).
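Here is a minimal shift-reduce sketch in C (not from the original notes) for the toy grammar E -> E + E | i; the grammar, the sample input, and the greedy reduce-first strategy are all illustrative assumptions, not a general LR parser:

#include <stdio.h>

char st[64];                      /* parser stack: st[0..top-1] */
int  top = 0;

void show(const char *act, const char *in) {
    printf("%-8s stack=%-6.*s input=%s\n", act, top, st, in);
}

int main(void) {
    const char *p = "i+i";        /* sample input sentence */
    /* loop until the stack holds only the start symbol E and input is empty */
    while (*p || top != 1 || st[0] != 'E') {
        if (top >= 1 && st[top-1] == 'i') {
            st[top-1] = 'E';                       /* reduce by E -> i     */
            show("E->i", p);
        } else if (top >= 3 && st[top-3] == 'E' && st[top-2] == '+'
                            && st[top-1] == 'E') {
            top -= 2; st[top-1] = 'E';             /* reduce by E -> E + E */
            show("E->E+E", p);
        } else if (*p) {
            st[top++] = *p++;                      /* shift next token     */
            show("shift", p);
        } else {
            printf("error: no handle and no input left\n");
            return 1;
        }
    }
    printf("accept\n");
    return 0;
}

Running it prints the classic trace for i+i: shift, reduce by E->i, shift, shift, reduce by E->i, reduce by E->E+E, accept.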
