
I need to extract function blocks (definitions with their full bodies, not just declarations) so I can build a function dependency graph. From that graph I'll identify connected components and modularize my extremely large C codebase, one file at a time.

The problem: I need a C parser that identifies function blocks and nothing more. We have custom types and so on, but the signatures follow this shape:

storage_class return_type function_name ( comma separated type value pairs )
{

//some content I view as generic stuff

}

The solutions I came up with: obviously, like any sane person would, use sly or pycparser.

The problem with pycparser: it needs the preprocessor run over includes from other files just to identify blocks of code. In my case the includes go six levels deep. Sorry, I can't show the actual code.

My attempt at writing it with sly:

from sly import Lexer, Parser
import re

def comment_remover(text):
    def replacer(match):
        s = match.group(0)
        if s.startswith('/'):
            return " " # note: a space and not an empty string
        else:
            return s
    pattern = re.compile(
        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
        re.DOTALL | re.MULTILINE
    )
    return re.sub(pattern, replacer, text)

class CLexer(Lexer):
    ignore = ' \t\n'
    tokens = {LEXEME, PREPROP, FUNC_DECL, FUNC_DEF, LBRACE, RBRACE, SYMBOL}
    literals = {'(', ')', ',', '\n', '<', '>', '-', ';', '&', '*', '=', '!'}
    LBRACE = r'\{'
    RBRACE = r'\}'
    FUNC_DECL = r'[a-z]+[ \n\t]+[a-zA-Z_0-9]+[ \n\t]+[a-zA-Z_0-9]+[ \n\t]*\([a-zA-Z_\* \,\t\n]+\)[ ]*\;'
    FUNC_DEF = r'[a-zA-Z_0-9]+[ \n\t]+[a-zA-Z_0-9]+[ \n\t]*\([a-zA-Z_\* \,\t\n]+\)'
    PREPROP = r'#[a-zA-Z_][a-zA-Z0-9_\" .\<\>\/\(\)\-\+]*'
    LEXEME = r'[a-zA-Z0-9]+'
    SYMBOL = r'[-!$%^&*\(\)_+|~=`\[\]\:\"\;\'\<\>\?\,\.\/]'


    def __init__(self):
        self.nesting_level = 0
        self.lineno = 0

    @_(r'\n+')
    def newline(self, t):
        self.lineno += t.value.count('\n')

    @_(r'[-!$%^&*\(\)_+|~=`\[\]\:\"\;\'\<\>\?\,\.\/]')
    def symbol(self, t):
        t.type = 'symbol'
        return t

    def error(self, t):
        print("Illegal character '%s'" % t.value[0])
        self.index += 1

class CParser(Parser):
    # Get the token list from the lexer (required)
    tokens = CLexer.tokens

    @_('PREPROP')
    def expr(self, p):
        return p.PREPROP

    @_('FUNC_DECL')
    def expr(self, p):
        return p.FUNC_DECL

    @_('func')
    def expr(self, p):
        return p.func

    # Grammar rules and actions
    @_('FUNC_DEF LBRACE stmt RBRACE')
    def func(self, p):
        return p.FUNC_DEF + p.LBRACE + (p.stmt or '') + p.RBRACE

    @_('LEXEME stmt')
    def stmt(self, p):
        return p.LEXEME + (p.stmt or '')

    @_('SYMBOL stmt')
    def stmt(self, p):
        return p.SYMBOL + (p.stmt or '')

    @_('empty')
    def stmt(self, p):
        return p.empty

    @_('')
    def empty(self, p):
        pass

with open('inputfile.c') as f:
    data = comment_remover(f.read())
    lexer = CLexer()
    parser = CParser()
    while True:
        try:
            result = parser.parse(lexer.tokenize(data))
            print(result)
        except EOFError:
            break

Errors:

None
None
None
.
.
.
.
None
None
yacc: Syntax error at line 1, token=PREPROP
yacc: Syntax error at line 1, token=LBRACE
yacc: Syntax error at line 1, token=PREPROP
yacc: Syntax error at line 1, token=LBRACE
yacc: Syntax error at line 1, token=PREPROP
.
.
.
.
.

Input:

#include <mycustomheader1.h> //defines type T1
#include <somedir/mycustomheader2.h> //defines type T2
#include <someotherdir/somefile.c>

MACRO_THINGY_DEFINED_IN_SOMEFILE(M1,M2) 

static T1 function_name_thats_way_too_long_than_usual(int *a, float* b, T2* c)
{

 //some code I don't even care about at this point

}

extern T2 function_name_thats_way_too_long_than_usual(int *a, char* b, T1* c)
{

 //some code I don't even care about at this point

}

Desired output:


function1 : 

static T1 function_name_thats_way_too_long_than_usual(int *a, float* b, T2* c)
{

 //some code I don't even care about at this point

}

function2 :

extern T2 function_name_thats_way_too_long_than_usual(int *a, char* b, T1* c)
{

 //some code I don't even care about at this point

}



1 Answer


pycparser has a func_defs example that does exactly what you need, but IIUC you're having trouble with the preprocessing?
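
The func_defs example boils down to a NodeVisitor walk over the parsed AST. A minimal sketch (assumes pycparser is installed and the source has already been preprocessed):

```python
from pycparser import c_ast, c_parser

def function_names(source):
    """Return the names of all function definitions in preprocessed C source."""
    names = []

    class FuncDefVisitor(c_ast.NodeVisitor):
        def visit_FuncDef(self, node):
            # node.decl carries the signature; node.body is the whole block,
            # which is exactly the "function block" you want to extract
            names.append(node.decl.name)

    FuncDefVisitor().visit(c_parser.CParser().parse(source))
    return names
```

The same visitor works on real files via pycparser.parse_file once preprocessing is sorted out.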

This post describes in detail why pycparser needs preprocessed files and how to set that up. If you control the build system, it's actually quite easy. Once the files are preprocessed, the example mentioned above should work.
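
If you do control the build system, the preprocessing step itself can be scripted. A rough sketch, assuming gcc is available as the preprocessor and you know the include directories (all names here are hypothetical):

```python
import subprocess

def build_cpp_cmd(path, include_dirs=(), cpp='gcc'):
    """Assemble a C-preprocessor invocation for one source file."""
    cmd = [cpp, '-E']                        # -E: stop after preprocessing
    cmd += ['-I' + d for d in include_dirs]  # your six levels of includes
    cmd.append(path)
    return cmd

def preprocess(path, include_dirs=()):
    # Produces a single self-contained translation unit to feed to pycparser.
    return subprocess.run(build_cpp_cmd(path, include_dirs),
                          capture_output=True, text=True, check=True).stdout
```

If the real headers use compiler extensions pycparser can't handle, its bundled fake_libc_include directory can be added as another -I entry.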

I'd also note that finding function dependencies statically is not a simple problem, because of function pointers. You also can't do it accurately from a single file - it requires multi-file analysis.

Answered 2019-07-28T13:33:34.567