CFE stands for Common Front End. Its main job is to translate Dart source code into a kernel AST, going through lexical analysis, syntax analysis, and semantic analysis, and finally writing the result to a dill file. The CFE is itself written in Dart and runs in the kernel isolate. Because the generated dill is platform-independent, a dill produced on x86 can be loaded and run on an ARM platform.

  • Compile Dart code (lexical, syntax, and semantic analysis) and return the result to the main isolate

    _processLoadRequest(request) async {
        Compiler? compiler;
        if (incremental) {
           compiler = await lookupOrBuildNewIncrementalCompiler(..);
        } else {
           compiler = new SingleShotCompilerWrapper(..);
        }
        CompilerResult compilerResult = await compiler.compile(script!);
    }

    compiler.compile(script)
       SingleShotCompilerWrapper::compileInternal(Uri script)
          kernelForProgram(Uri source, ..)
             kernelForProgramInternal(source, ...)
             ....
    
  • Compile Dart source: lexical, syntax, and semantic analysis

    kernelForProgramInternal(..)            -- kernel_generator.dart
       generateKernelInternal(..)           -- kernel_generator_impl.dart
         KernelTarget kernelTarget;
         kernelTarget.buildComponent(..)    -- kernel_generator_impl.dart
            loader.buildBodies(..)          -- kernel_target.dart
              SourceLoader::buildBody()     -- source_loader.dart
                    // Lexical analysis
                   Token tokens = tokenize(..)        -- source_loader.dart
                    // Syntax analysis
                   DietParser parser = new DietParser
                   parser.parseUnit(tokens);
    
  • Lexical analysis

    Future<Null> buildBody(LibraryBuilder library) async {
        if (library is SourceLibraryBuilder) {
          // We tokenize source files twice to keep memory usage low. This is the
          // second time, and the first time was in [buildOutline] above. So this
          // time we suppress lexical errors.
    
          // Lexical analysis: SourceLoader::tokenize
    
          Token tokens = await tokenize(library, suppressLexicalErrors: true);
          DietListener listener = createDietListener(library);
          DietParser parser = new DietParser(listener);
    
          // Syntax analysis
          parser.parseUnit(tokens);
          for (LibraryBuilder part in library.parts) {
            if (part.partOfLibrary != library) {
              // Part was included in multiple libraries. Skip it here.
              continue;
            }
            Token tokens = await tokenize(part as SourceLibraryBuilder,
                suppressLexicalErrors: true);
            // ignore: unnecessary_null_comparison
            if (tokens != null) {
              listener.uri = part.fileUri;
              parser.parseUnit(tokens);
            }
          }
        }
      }
    
    
    SourceLoader::tokenize(Library, ...)    --- source_loader.dart
       scan(rawBytes, )                 ---- _fe_analyzer_shared/src/scanner/scanner.dart
         Utf8BytesScanner::tokenize(rawBytes, ...)
           AbstractScanner::tokenize(rawBytes,...) {
               while (!atEndOfFile()) {
                  int next = advance();
                  ... parse tokens
                  //   void appendToken(Token token) // appends to the token list
               }
           }
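The AbstractScanner loop above advances one code unit at a time and appends tokens as it recognizes them. A simplified Python sketch of that advance/append loop (the token kinds and this `scan` helper are illustrative, not the _fe_analyzer_shared API):

```python
def scan(source):
    """Minimal single-pass scanner: identifiers, integers, punctuation."""
    tokens = []
    i, n = 0, len(source)
    while i < n:                                  # while (!atEndOfFile())
        ch = source[i]                            # int next = advance()
        if ch.isspace():
            i += 1
        elif ch.isalpha() or ch == "_":
            start = i
            while i < n and (source[i].isalnum() or source[i] == "_"):
                i += 1
            tokens.append(("identifier", source[start:i]))  # appendToken(..)
        elif ch.isdigit():
            start = i
            while i < n and source[i].isdigit():
                i += 1
            tokens.append(("int", source[start:i]))
        else:
            tokens.append(("punct", ch))
            i += 1
    tokens.append(("eof", ""))
    return tokens


print(scan("var x = 42;"))
# [('identifier', 'var'), ('identifier', 'x'), ('punct', '='),
#  ('int', '42'), ('punct', ';'), ('eof', '')]
```

The real scanner works on UTF-8 bytes and builds a linked list of Token objects rather than a Python list, but the control flow is the same: one look-ahead character decides which token-specific sub-loop runs next.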
    

    During Dart lexical analysis, a dedicated DFA is built just for keywords, which makes recognizing keyword tokens easier.

    Highlight: the keyword state machine, which builds a per-state transition table; see keyword_state.dart.
    
     e.g. for case, catch, class, const, continue, covariant (the covariant branch off table['o'] via 'v' is omitted below):
    
    
                      |---------- table['s']  -- table['e']                = case
                      |
           ------- table['a']  -- table['t']  -- table['c'] -- table['h']  = catch
           |
    table['c'] --- table['l']  -- table['a'] -- table['s'] -- table['s']  = class
           |         
           ------- table['o']  -- table['n'] -- table['s'] -- table['t']  = const
                                      |
                                      --------- table['t'] -- table['i'] 
                                                           -- table['n']
                                                           -- table['u']
                                                           -- table['e']  = continue
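The diagram above is a trie of per-character transition tables: each state maps the next character to a successor state, and a state that completes a keyword records it. A minimal Python sketch of the same idea (the names `KeywordState`, `build_keyword_dfa`, and `match_keyword` are illustrative, not the keyword_state.dart API):

```python
class KeywordState:
    """One DFA state: per-character transitions plus an optional keyword."""
    def __init__(self):
        self.table = {}      # next character -> KeywordState
        self.keyword = None  # set when this state completes a keyword


def build_keyword_dfa(keywords):
    """Build the trie-shaped DFA; shared prefixes share states."""
    root = KeywordState()
    for word in keywords:
        state = root
        for ch in word:
            state = state.table.setdefault(ch, KeywordState())
        state.keyword = word
    return root


def match_keyword(root, text):
    """Walk the DFA one character at a time; None if text is not a keyword."""
    state = root
    for ch in text:
        state = state.table.get(ch)
        if state is None:
            return None
    return state.keyword


root = build_keyword_dfa(
    ["case", "catch", "class", "const", "continue", "covariant"])
print(match_keyword(root, "catch"))  # catch
print(match_keyword(root, "casex"))  # None
```

Because all six keywords share the initial 'c' state, the scanner rejects a non-keyword identifier as soon as a character has no transition, without comparing against each keyword string separately.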
    
  • Syntax analysis

    https://github.com/dart-lang/sdk/blob/master/pkg/_fe_analyzer_shared/lib/src/parser/parser_impl.dart

    
      /// ```
      /// libraryDefinition:
      ///   scriptTag?
      ///   libraryName?
      ///   importOrExport*
      ///   partDirective*
      ///   topLevelDefinition*
      /// ;
      ///
      /// partDeclaration:
      ///   partHeader topLevelDefinition*
      /// ;
      /// ```
      Token parseUnit(Token token) {
        while (!token.next!.isEof) {
          final Token start = token.next!;
          token = parseTopLevelDeclarationImpl(token, directiveState);
          listener.endTopLevelDeclaration(token.next!);
          ...
        }
    
      }
    
    
      /// ```
      /// topLevelDefinition:
      ///   classDefinition |
      ///   enumType |
      ///   typeAlias |
      ///   'external'? functionSignature ';' |
      ///   'external'? getterSignature ';' |
      ///   'external'? setterSignature ';' |
      ///   functionSignature functionBody |
      ///   returnType? 'get' identifier functionBody |
      ///   returnType? 'set' identifier formalParameterList functionBody |
      ///   ('final' | 'const') type? staticFinalDeclarationList ';' |
      ///   variableDeclaration ';'
      /// ;
      /// ```
      Token parseTopLevelDeclarationImpl(
          Token token, DirectiveContext? directiveState) {
          ...
      }
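parseUnit above loops over top-level declarations until EOF and reports each one to a listener, delegating the per-declaration work to parseTopLevelDeclarationImpl. A stripped-down Python sketch of that loop-plus-listener shape (the toy grammar, in which a declaration is everything up to the next ';', is invented for illustration and much simpler than the real topLevelDefinition rule):

```python
def parse_top_level_declaration(tokens, pos):
    """Toy stand-in for parseTopLevelDeclarationImpl: consume up to ';'."""
    while tokens[pos][0] != "eof" and tokens[pos][1] != ";":
        pos += 1
    if pos < len(tokens) and tokens[pos][1] == ";":
        pos += 1  # consume the ';'
    return pos


def parse_unit(tokens, listener):
    """Loop over declarations until EOF, notifying the listener after each."""
    pos = 0
    while tokens[pos][0] != "eof":            # while (!token.next!.isEof)
        start = pos
        pos = parse_top_level_declaration(tokens, pos)
        listener.append(("endTopLevelDeclaration", tokens[start][1]))
    return pos


events = []
toks = [("identifier", "var"), ("identifier", "x"), ("punct", "="),
        ("int", "1"), ("punct", ";"),
        ("identifier", "var"), ("identifier", "y"), ("punct", ";"),
        ("eof", "")]
parse_unit(toks, events)
print(events)  # one endTopLevelDeclaration event per declaration
```

The listener pattern is the key design choice: the parser itself builds no tree, it only fires events (here appended to a list), and different listeners, such as the DietListener used by the CFE, turn the same event stream into different outputs.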