Recently a Calcite PMC member gave a talk demonstrating how to use Calcite; the video is on YouTube.
The accompanying source code is on GitHub, but the LuceneQueryProcessor in that repository is only a template, so I have pasted my completed implementation below.
public static void main(String[] args) throws Exception {
  if (args.length != 1) {
    System.out.println("Usage: processor SQL_FILE");
    System.exit(-1);
  }
  String sqlQuery = new String(Files.readAllBytes(Paths.get(args[0])), StandardCharsets.UTF_8);
  // TODO 1. Create the root schema and type factory
  CalciteSchema calciteSchema = CalciteSchema.createRootSchema(false);
  RelDataTypeFactory typeFactory = new JavaTypeFactoryImpl();
  // TODO 2. Create the data type for each TPC-H table
  for (TpchTable tpchTable : TpchTable.values()) {
    RelDataTypeFactory.Builder builder = typeFactory.builder();
    for (TpchTable.Column column : tpchTable.columns) {
      builder.add(column.name, typeFactory.createJavaType(column.type).getSqlTypeName());
    }
    String indexPath = Paths.get(DatasetIndexer.INDEX_LOCATION, "tpch", tpchTable.name()).toString();
    // TODO 3. Add the TPC-H table to the schema
    calciteSchema.add(tpchTable.name(), new LuceneTable(indexPath, builder.build()));
  }
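Conceptually, the loop above turns each TPC-H table's column list into a row type via a chainable builder. As a rough, self-contained sketch of that pattern (plain Java, no Calcite dependency; the class, method names, and column names below are invented for illustration, not Calcite's API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for RelDataTypeFactory.Builder: it accumulates
// (name, type) pairs and produces an immutable row-type description.
public class RowTypeSketch {
  static final class Builder {
    private final List<String> fields = new ArrayList<>();

    Builder add(String name, String sqlTypeName) {
      fields.add(name + " " + sqlTypeName);
      return this; // chainable, like Calcite's builder
    }

    String build() {
      return "RecordType(" + String.join(", ", fields) + ")";
    }
  }

  static String lineitemRowType() {
    // Mirrors what the inner loop over TpchTable.Column does for one table.
    return new Builder()
        .add("L_ORDERKEY", "BIGINT")
        .add("L_QUANTITY", "DECIMAL")
        .build();
  }

  public static void main(String[] args) {
    System.out.println(lineitemRowType());
  }
}
```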
  // TODO 4. Create an SQL parser
  SqlParser parser = SqlParser.create(sqlQuery);
  // TODO 5. Parse the query into an AST
  SqlNode parseAst = parser.parseQuery();
  // TODO 6. Print and check the AST
  System.out.println("[Parsed Query]");
  System.out.println(parseAst.toString());
  // TODO 7. Configure and instantiate the catalog reader
  CalciteConnectionConfig readerConfig =
      CalciteConnectionConfig.DEFAULT.set(CalciteConnectionProperty.CASE_SENSITIVE, "false");
  CatalogReader catalogReader =
      new CalciteCatalogReader(calciteSchema, Collections.emptyList(), typeFactory, readerConfig);
  // TODO 8. Create the SQL validator using the standard operator table and default configuration
  SqlValidator sqlValidator =
      SqlValidatorUtil.newValidator(SqlStdOperatorTable.instance(), catalogReader, typeFactory,
          SqlValidator.Config.DEFAULT);
  // TODO 9. Validate the initial AST
  SqlNode validateAst = sqlValidator.validate(parseAst);
  System.out.println("[Validated Query]");
  System.out.println(validateAst.toString());
  // TODO 10. Create the optimization cluster to maintain planning information
  RelOptCluster relOptCluster = newCluster(typeFactory);
  // TODO 11. Configure and instantiate the converter of the AST to logical plan
  //  - No view expansion (use NOOP_EXPANDER)
  //  - Standard expression normalization (use StandardConvertletTable.INSTANCE)
  //  - Default configuration (SqlToRelConverter.config())
  SqlToRelConverter sqlToRelConverter =
      new SqlToRelConverter(NOOP_EXPANDER, sqlValidator, catalogReader, relOptCluster,
          StandardConvertletTable.INSTANCE, SqlToRelConverter.config());
  // TODO 12. Convert the valid AST into a logical plan
  RelNode logicalPlan = sqlToRelConverter.convertQuery(validateAst, false, true).rel;
  // TODO 13. Display the logical plan with explain attributes
  System.out.println(
      RelOptUtil.dumpPlan("[Logical Plan]", logicalPlan, SqlExplainFormat.TEXT,
          SqlExplainLevel.EXPPLAN_ATTRIBUTES));
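dumpPlan renders the operator tree one node per line, with each input indented under its parent. A minimal sketch of that idea (plain Java; the node labels imitate the look of Calcite's LogicalProject/LogicalFilter/LogicalTableScan, but the classes here are invented for illustration):

```java
import java.util.Arrays;
import java.util.List;

// Invented mini plan node: a label plus child nodes, printed with indentation
// the way RelOptUtil.dumpPlan renders a RelNode tree.
public class PlanDumpSketch {
  static final class Node {
    final String name;
    final List<Node> inputs;
    Node(String name, Node... inputs) {
      this.name = name;
      this.inputs = Arrays.asList(inputs);
    }
  }

  // Recursively print a node, indenting children two spaces per level.
  static String dump(Node node, int depth) {
    StringBuilder sb = new StringBuilder();
    sb.append("  ".repeat(depth)).append(node.name).append('\n');
    for (Node input : node.inputs) {
      sb.append(dump(input, depth + 1));
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Roughly the shape produced for: SELECT c FROM t WHERE p
    Node plan = new Node("LogicalProject(c=[$0])",
        new Node("LogicalFilter(condition=[p])",
            new Node("LogicalTableScan(table=[t])")));
    System.out.print(dump(plan, 0));
  }
}
```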
  // TODO 14. Initialize optimizer/planner with the necessary rules
  RelOptPlanner planner = relOptCluster.getPlanner();
  planner.addRule(CoreRules.PROJECT_TO_CALC);
  planner.addRule(CoreRules.FILTER_TO_CALC);
  planner.addRule(EnumerableRules.ENUMERABLE_TABLE_SCAN_RULE);
  planner.addRule(EnumerableRules.ENUMERABLE_JOIN_RULE);
  planner.addRule(EnumerableRules.ENUMERABLE_CALC_RULE);
  // planner.addRule(EnumerableRules.ENUMERABLE_FILTER_RULE);
  // planner.addRule(EnumerableRules.ENUMERABLE_PROJECT_RULE);
  planner.addRule(EnumerableRules.ENUMERABLE_SORT_RULE);
  // TODO 15. Define the type of the output plan (in this case we want a physical plan in
  // EnumerableConvention)
  logicalPlan = planner.changeTraits(logicalPlan,
      logicalPlan.getTraitSet().replace(EnumerableConvention.INSTANCE));
  planner.setRoot(logicalPlan);
  // TODO 16. Start the optimization process to obtain the most efficient physical plan based on
  // the provided rule set
  EnumerableRel physicalPlan = (EnumerableRel) planner.findBestExp();
  // TODO 17. Display the physical plan
  System.out.println(
      RelOptUtil.dumpPlan("[Physical Plan]", physicalPlan, SqlExplainFormat.TEXT,
          SqlExplainLevel.EXPPLAN_ATTRIBUTES));
  // TODO 18. Compile generated code and obtain the executable program
  Bindable executablePlan = EnumerableInterpretable.toBindable(
      new HashMap<>(), null, physicalPlan, EnumerableRel.Prefer.ARRAY);
  // TODO 19. Run the program (SchemaOnlyDataContext is the helper class from the tutorial
  // template, a DataContext that exposes only the schema to the executing program)
  for (Object row : executablePlan.bind(new SchemaOnlyDataContext(calciteSchema))) {
    if (row instanceof Object[]) {
      System.out.println(Arrays.toString((Object[]) row));
    } else {
      System.out.println(row);
    }
  }
}
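The Bindable produced in the last step is essentially "a function from a data context to an iterator of rows". A self-contained sketch of that contract (plain Java; MiniBindable is a made-up stand-in, not Calcite's Bindable, whose real bind method takes a DataContext):

```java
import java.util.Arrays;
import java.util.List;

// Made-up stand-in for Calcite's Bindable: given a context, yield rows.
public class BindableSketch {
  interface MiniBindable {
    List<Object[]> bind(String context); // hypothetical; Calcite takes a DataContext here
  }

  // "Running" the compiled plan is just invoking it with a context.
  static List<Object[]> run(MiniBindable program) {
    return program.bind("schema-only context");
  }

  public static void main(String[] args) {
    // The compiled plan is code that knows how to produce its result rows.
    MiniBindable program = ctx -> Arrays.asList(
        new Object[]{1L, "apple"},
        new Object[]{2L, "pear"});
    for (Object[] row : run(program)) {
      System.out.println(Arrays.toString(row));
    }
  }
}
```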
LuceneTable is already implemented in the repository; the code above simply adds an instance of it to the Schema.
The relationship between Schema and Table: a Schema is a named container of Tables (and sub-Schemas), while each Table describes its row type and how its data is read.
With the data registered, the query-processing pipeline can run: Parser -> Validator -> Converter -> Optimizer.
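The same four-stage shape can be sketched end to end on a toy expression language (plain Java, no Calcite; every class and rule below is invented to mirror the stages, not Calcite's API):

```java
// Toy four-stage pipeline mirroring Parser -> Validator -> Converter -> Optimizer
// for the expression "a + 0": each stage consumes the previous stage's output.
import java.util.Map;

public class PipelineSketch {
  // Parse: text -> AST (here, just split "x + y" into its operands).
  static String[] parse(String sql) {
    return sql.split(" \\+ ");
  }

  // Validate: resolve names against a "catalog" and reject unknown ones.
  static String[] validate(String[] ast, Map<String, String> catalog) {
    for (String operand : ast) {
      if (!catalog.containsKey(operand) && !operand.matches("\\d+")) {
        throw new IllegalArgumentException("Unknown column: " + operand);
      }
    }
    return ast;
  }

  // Convert: AST -> "logical plan" (a normalized operator form).
  static String convert(String[] ast) {
    return "Plus(" + String.join(", ", ast) + ")";
  }

  // Optimize: apply a rewrite rule, here x + 0 -> x.
  static String optimize(String plan) {
    return plan.replaceAll("Plus\\((.+), 0\\)", "$1");
  }

  public static void main(String[] args) {
    Map<String, String> catalog = Map.of("a", "INTEGER");
    String plan = optimize(convert(validate(parse("a + 0"), catalog)));
    System.out.println(plan); // the optimizer has removed the useless "+ 0"
  }
}
```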



