
I. Flink Environment Setup [Installation & Deployment | Maven | Gradle | Scala Packaging]

Background:

My company mandates a Gradle + Scala stack, but most of the available learning material covers Maven + Java. I am therefore working through the Scala version, guided by the Java material and the official documentation.

Three test servers are already available; Flink 1.12.0 will be deployed under /opt/app on each of them.

1. Installation and Deployment

Flink supports four deployment modes: Local, Standalone (standalone cluster), Standalone-HA (high availability), and Flink-on-YARN (production).

In practice, production deployments use Flink-on-YARN.

Download: Flink 1.12.0

 flink-1.12.0-bin-scala_2.12.tgz 
How Flink-on-YARN works

  • Session mode

  • Per-job mode

Steps

1. Cluster plan:

- node01 (Master + Slave): JobManager + TaskManager

- node02 (Master + Slave): standby JobManager + TaskManager

- node03 (Slave): TaskManager

2. Upload and extract:

# Upload directory: /opt/software
# Extract
[root@node01 software]# tar -xvf flink-1.12.0-bin-scala_2.12.tgz -C ../app/
[root@node01 software]# cd ../app/
# Fix ownership if you run into permission problems
[root@node01 app]# chown -R root:root flink-1.12.0/
# Create a symlink (recommended: it makes later upgrades painless)
[root@node01 app]# ln -s flink-1.12.0/ flink
# Configure environment variables
[root@node01 app]# vim /etc/profile
#Flink_HOME
export Flink_HOME=/opt/app/flink
export PATH=$PATH:$Flink_HOME/bin
[root@node01 app]# source /etc/profile

3. Edit flink-conf.yaml

[root@node01 conf]# vim /opt/app/flink/conf/flink-conf.yaml
# Changes
jobmanager.rpc.address: node01
taskmanager.numberOfTaskSlots: 2
web.submit.enable: true

jobmanager.archive.fs.dir: hdfs://node01:8020/flink/completed-jobs/
historyserver.web.address: node01
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs://node01:8020/flink/completed-jobs/


# Enable HA and use a filesystem backend for state snapshots
state.backend: filesystem
# Checkpoint directory: snapshots go to HDFS
state.backend.fs.checkpointdir: hdfs://node01:8020/flink-checkpoints
# Use ZooKeeper for high availability
high-availability: zookeeper
# Store JobManager metadata in HDFS
high-availability.storageDir: hdfs://node01:8020/flink/ha/
# ZooKeeper cluster addresses
high-availability.zookeeper.quorum: node01:2181,node02:2181,node03:2181
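Before distributing the directory in step 7, the HA keys above can be sanity-checked mechanically. A minimal sketch: it uses a throwaway sample file for illustration; on the real cluster, point CONF at /opt/app/flink/conf/flink-conf.yaml instead.

```shell
# Check that the required HA keys are present in flink-conf.yaml.
# CONF is a temporary sample here; use /opt/app/flink/conf/flink-conf.yaml for real.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
jobmanager.rpc.address: node01
state.backend: filesystem
high-availability: zookeeper
high-availability.storageDir: hdfs://node01:8020/flink/ha/
high-availability.zookeeper.quorum: node01:2181,node02:2181,node03:2181
EOF
for key in state.backend high-availability high-availability.storageDir high-availability.zookeeper.quorum; do
    # "^key:" anchors on the key name at the start of a line
    grep -q "^${key}:" "$CONF" && echo "ok: $key" || echo "MISSING: $key"
done
```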
  • After distributing (step 7), change this line in node02's flink-conf.yaml:
jobmanager.rpc.address: node02
  • Put the HDFS integration jar into the lib directory
Since Flink 1.8, the official binaries no longer bundle the HDFS integration jar, so it has to be added manually.

Download:

flink-shaded-hadoop-2-uber-2.7.5-10.0.jar

4. Edit masters

[root@node01 conf]# vim /opt/app/flink/conf/masters
node01:8081
node02:8081

5. Edit workers (the file was renamed from slaves in Flink 1.12)

[root@node01 conf]# vim /opt/app/flink/conf/workers
node01
node02
node03

6. Add the HADOOP_CONF_DIR environment variable

[root@node01 conf]# vim /etc/profile
export HADOOP_CONF_DIR=/opt/app/hadoop/etc/hadoop

7. Distribute to the other nodes

[root@node01 app]# scp -r flink-1.12.0/ node02:/opt/app/
[root@node01 app]# scp -r flink-1.12.0/ node03:/opt/app/

[root@node01 app]# scp /etc/profile node02:/etc/profile
[root@node01 app]# scp /etc/profile node03:/etc/profile
# on node02 and node03
[root@node02 app]# source /etc/profile
[root@node03 app]# source /etc/profile
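With more worker nodes, the scp pairs above turn into a loop. A sketch that echoes the commands first as a dry run; drop the leading echo to actually copy (hostnames are the ones used in this guide):

```shell
# Dry-run: print the distribution commands for each worker node.
# Remove the leading "echo" to perform the copies for real.
for host in node02 node03; do
    echo scp -r /opt/app/flink-1.12.0/ "$host":/opt/app/
    echo scp /etc/profile "$host":/etc/profile
done
```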

8. Disable YARN's memory checks

vim /opt/app/hadoop/etc/hadoop/yarn-site.xml

    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
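After editing, the two switches can be confirmed mechanically. A sketch against a sample fragment; on the cluster, point FILE at the real yarn-site.xml under $HADOOP_CONF_DIR:

```shell
# Verify both YARN memory checks are disabled.
# FILE is a sample fragment here; use $HADOOP_CONF_DIR/yarn-site.xml for real.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
EOF
for check in pmem vmem; do
    # -A1 prints the <name> line plus the <value> line that follows it
    grep -A1 "yarn.nodemanager.${check}-check-enabled" "$FILE" \
        | grep -q "<value>false</value>" && echo "${check} check disabled"
done
```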
Testing

1. Start ZooKeeper and Hadoop first (adapt to your own servers, or see Appendix 3 and 4).

2. Start Flink

  • Standalone cluster start (not used in production)
[root@node01 flink]# ./bin/start-cluster.sh 
# Start the history server
[root@node01 flink]# ./bin/historyserver.sh start
# Open the Flink web UI, or check the processes with jps
Flink web UI:
		http://node01:8081/#/overview
Flink history server:
		http://node01:8082/#/overview
  • Session mode (used sparingly in production)

Starts a Flink cluster on YARN once and reuses it: every job submitted afterwards goes to that cluster, and its resources stay allocated until the cluster is shut down manually. Suited to a large number of small jobs.

1. Start a Flink session on YARN; run the following on node01:

./bin/yarn-session.sh -n 2 -tm 800 -s 1 -d

Notes:

This requests 2 containers at 800 MB each, i.e. 2 slots and 1600 MB of memory in total:

# -n  number of containers to request, i.e. the number of TaskManagers

# -tm memory per TaskManager, in MB

# -s  number of slots per TaskManager

# -d  run detached, as a background process
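Writing the same command with named variables makes the sizing arithmetic explicit (containers × per-TaskManager memory = total memory). A small sketch; the values mirror the example above:

```shell
# Session sizing: 2 TaskManager containers x 800 MB each = 1600 MB total.
containers=2
tm_mb=800
slots=1
echo "total TaskManager memory: $(( containers * tm_mb )) MB"
echo "total slots: $(( containers * slots ))"
echo "./bin/yarn-session.sh -n $containers -tm $tm_mb -s $slots -d"
```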

2. Open the YARN UI:

http://node01:8088/cluster

3. Submit a job with flink run:

./bin/flink run /opt/app/flink/examples/batch/WordCount.jar

After it finishes, further small jobs can be submitted to the same session:

./bin/flink run /opt/app/flink/examples/batch/WordCount.jar

4. The ApplicationMaster link in the YARN UI leads to the Flink web UI.

5. Shut down the yarn-session:

yarn application -kill application_1609508087977_0005

Per-job mode (the main mode in production)

Each Flink job gets its own dedicated Flink cluster on YARN; when the job finishes, the cluster shuts down and its resources are released automatically. Suited to large jobs.

./bin/flink run -m yarn-cluster -yjm 1024 -ytm 1024 /opt/app/flink/examples/batch/WordCount.jar

# -m   set to yarn-cluster to run on YARN

# -yjm 1024  JobManager memory, in MB

# -ytm 1024  TaskManager memory, in MB

2. Open the YARN UI:

http://node01:8088/cluster

Parameter reference
[root@node01 flink]# ./bin/flink --help
./flink <ACTION> [OPTIONS] [ARGUMENTS]

The following actions are available:

Action "run" compiles and runs a program.

  Syntax: run [OPTIONS] <jar-file> <arguments>
  "run" action options:
     -c,--class <classname>               Class with the program entry point
                                          ("main()" method). Only needed if the
                                          JAR file does not specify the class in
                                          its manifest.
     -C,--classpath <url>                 Adds a URL to each user code
                                          classloader  on all nodes in the
                                          cluster. The paths must specify a
                                          protocol (e.g. file://) and be
                                          accessible on all nodes (e.g. by means
                                          of a NFS share). You can use this
                                          option multiple times for specifying
                                          more than one URL. The protocol must
                                          be supported by the {@link
                                          java.net.URLClassLoader}.
     -d,--detached                        If present, runs the job in detached
                                          mode
     -n,--allowNonRestoredState           Allow to skip savepoint state that
                                          cannot be restored. You need to allow
                                          this if you removed an operator from
                                          your program that was part of the
                                          program when the savepoint was
                                          triggered.
     -p,--parallelism <parallelism>       The parallelism with which to run the
                                          program. Optional flag to override the
                                          default value specified in the
                                          configuration.
     -py,--python <pythonFile>            Python script with the program entry
                                          point. The dependent resources can be
                                          configured with the `--pyFiles`
                                          option.
     -pyarch,--pyArchives <arg>           Add python archive files for job. The
                                          archive files will be extracted to the
                                          working directory of python UDF
                                          worker. Currently only zip-format is
                                          supported. For each archive file, a
                                          target directory can be specified. If
                                          the target directory name is
                                          specified, the archive file will be
                                          extracted to a directory with the
                                          specified name. Otherwise, the archive
                                          file will be extracted to a directory
                                          with the same name of the archive
                                          file. The files uploaded via this
                                          option are accessible via relative
                                          path. '#' could be used as the
                                          separator of the archive file path and
                                          the target directory name. Comma (',')
                                          could be used as the separator to
                                          specify multiple archive files. This
                                          option can be used to upload the
                                          virtual environment, the data files
                                          used in Python UDF (e.g.: --pyArchives
                                          file:///tmp/py37.zip,file:///tmp/data.
                                          zip#data --pyExecutable
                                          py37.zip/py37/bin/python). The data
                                          files could be accessed in Python UDF,
                                          e.g.: f = open('data/data.txt', 'r').
     -pyexec,--pyExecutable <arg>         Specify the path of the python
                                          interpreter used to execute the python
                                          UDF worker (e.g.: --pyExecutable
                                          /usr/local/bin/python3). The python
                                          UDF worker depends on Python 3.5+,
                                          Apache Beam (version == 2.23.0), Pip
                                          (version >= 7.1.0) and SetupTools
                                          (version >= 37.0.0). Please ensure
                                          that the specified environment meets
                                          the above requirements.
     -pyfs,--pyFiles <pythonFiles>        Attach custom python files for job.
                                          These files will be added to the
                                          PYTHONPATH of both the local client
                                          and the remote python UDF worker. The
                                          standard python resource file suffixes
                                          such as .py/.egg/.zip or directory are
                                          all supported. Comma (',') could be
                                          used as the separator to specify
                                          multiple files (e.g.: --pyFiles
                                          file:///tmp/myresource.zip,hdfs:///$na
                                          menode_address/myresource2.zip).
     -pym,--pyModule <pythonModule>       Python module with the program entry
                                          point. This option must be used in
                                          conjunction with `--pyFiles`.
     -pyreq,--pyRequirements <arg>        Specify a requirements.txt file which
                                          defines the third-party dependencies.
                                          These dependencies will be installed
                                          and added to the PYTHONPATH of the
                                          python UDF worker. A directory which
                                          contains the installation packages of
                                          these dependencies could be specified
                                          optionally. Use '#' as the separator
                                          if the optional parameter exists
                                          (e.g.: --pyRequirements
                                          file:///tmp/requirements.txt#file:///t
                                          mp/cached_dir).
     -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job
                                          from (for example
                                          hdfs:///flink/savepoint-1537).
     -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                          mode, perform a best-effort cluster
                                          shutdown when the CLI is terminated
                                          abruptly, e.g., in response to a user
                                          interrupt, such as typing Ctrl + C.
  Options for Generic CLI mode:
     -D <property=value>   Allows specifying multiple generic configuration
                           options. The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. For the "run" action the currently available
                           targets are: "remote", "local", "kubernetes-session",
                           "yarn-per-job", "yarn-session". For the
                           "run-application" action the currently available
                           targets are: "kubernetes-application",
                           "yarn-application".

  Options for yarn-cluster mode:
     -d,--detached                        If present, runs the job in detached
                                          mode
     -m,--jobmanager <arg>                Set to yarn-cluster to use YARN
                                          execution mode.
     -yat,--yarnapplicationType <arg>     Set a custom application type for the
                                          application on YARN
     -yD <property=value>                 use value for given property
     -yd,--yarndetached                   If present, runs the job in detached
                                          mode (deprecated; use non-YARN
                                          specific option instead)
     -yh,--yarnhelp                       Help for the Yarn session CLI.
     -yid,--yarnapplicationId <arg>       Attach to running YARN session
     -yj,--yarnjar <arg>                  Path to Flink jar file
     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with
                                          optional unit (default: MB)
     -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN
                                          application
     -ynm,--yarnname <arg>                Set a custom name for the application
                                          on YARN
     -yq,--yarnquery                      Display available YARN resources
                                          (memory, cores)
     -yqu,--yarnqueue <arg>               Specify YARN queue.
     -ys,--yarnslots <arg>                Number of slots per TaskManager
     -yt,--yarnship <arg>                 Ship files in the specified directory
                                          (t for transfer)
     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with
                                          optional unit (default: MB)
     -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper
                                          sub-paths for high availability mode
     -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                          sub-paths for high availability mode

  Options for default mode:
     -D <property=value>             Allows specifying multiple generic
                                     configuration options. The available
                                     options can be found at
                                     https://ci.apache.org/projects/flink/flink-
                                     docs-stable/ops/config.html
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration. Attention: This
                                     option is respected only if the
                                     high-availability configuration is NONE.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "run-application" runs an application in Application Mode.

  Syntax: run-application [OPTIONS] <jar-file> <arguments>
  Options for Generic CLI mode:
     -D    Allows specifying multiple generic configuration
                           options. The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor    DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target      The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. For the "run" action the currently available
                           targets are: "remote", "local", "kubernetes-session",
                           "yarn-per-job", "yarn-session". For the
                           "run-application" action the currently available
                           targets are: "kubernetes-application",
                           "yarn-application".



Action "info" shows the optimized execution plan of the program (JSON).

  Syntax: info [OPTIONS] <jar-file> <arguments>
  "info" action options:
     -c,--class <classname>           Class with the program entry point
                                      ("main()" method). Only needed if the JAR
                                      file does not specify the class in its
                                      manifest.
     -p,--parallelism <parallelism>   The parallelism with which to run the
                                      program. Optional flag to override the
                                      default value specified in the
                                      configuration.


Action "list" lists running and scheduled programs.

  Syntax: list [OPTIONS]
  "list" action options:
     -a,--all         Show all programs and their JobIDs
     -r,--running     Show only running programs and their JobIDs
     -s,--scheduled   Show only scheduled programs and their JobIDs
  Options for Generic CLI mode:
     -D    Allows specifying multiple generic configuration
                           options. The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor    DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target      The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. For the "run" action the currently available
                           targets are: "remote", "local", "kubernetes-session",
                           "yarn-per-job", "yarn-session". For the
                           "run-application" action the currently available
                           targets are: "kubernetes-application",
                           "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager             Set to yarn-cluster to use YARN execution
                                      mode.
     -yid,--yarnapplicationId    Attach to running YARN session
     -z,--zookeeperNamespace     Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -D              Allows specifying multiple generic
                                     configuration options. The available
                                     options can be found at
                                     https://ci.apache.org/projects/flink/flink-
                                     docs-stable/ops/config.html
     -m,--jobmanager            Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration. Attention: This
                                     option is respected only if the
                                     high-availability configuration is NONE.
     -z,--zookeeperNamespace    Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "stop" stops a running program with a savepoint (streaming jobs only).

  Syntax: stop [OPTIONS] <Job ID>
  "stop" action options:
     -d,--drain                           Send MAX_WATERMARK before taking the
                                          savepoint and stopping the pipeline.
     -p,--savepointPath <savepointPath>   Path to the savepoint (for example
                                          hdfs:///flink/savepoint-1537). If no
                                          directory is specified, the configured
                                          default will be used
                                          ("state.savepoints.dir").
  Options for Generic CLI mode:
     -D    Allows specifying multiple generic configuration
                           options. The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor    DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target      The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. For the "run" action the currently available
                           targets are: "remote", "local", "kubernetes-session",
                           "yarn-per-job", "yarn-session". For the
                           "run-application" action the currently available
                           targets are: "kubernetes-application",
                           "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager             Set to yarn-cluster to use YARN execution
                                      mode.
     -yid,--yarnapplicationId    Attach to running YARN session
     -z,--zookeeperNamespace     Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -D              Allows specifying multiple generic
                                     configuration options. The available
                                     options can be found at
                                     https://ci.apache.org/projects/flink/flink-
                                     docs-stable/ops/config.html
     -m,--jobmanager            Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration. Attention: This
                                     option is respected only if the
                                     high-availability configuration is NONE.
     -z,--zookeeperNamespace    Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "cancel" cancels a running program.

  Syntax: cancel [OPTIONS] <Job ID>
  "cancel" action options:
     -s,--withSavepoint <targetDirectory>   **DEPRECATION WARNING**: Cancelling
                                            a job with savepoint is deprecated.
                                            Use "stop" instead.
                                            Trigger savepoint and cancel job.
                                            The target directory is optional. If
                                            no directory is specified, the
                                            configured default directory
                                            (state.savepoints.dir) is used.
  Options for Generic CLI mode:
     -D    Allows specifying multiple generic configuration
                           options. The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor    DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target      The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. For the "run" action the currently available
                           targets are: "remote", "local", "kubernetes-session",
                           "yarn-per-job", "yarn-session". For the
                           "run-application" action the currently available
                           targets are: "kubernetes-application",
                           "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager             Set to yarn-cluster to use YARN execution
                                      mode.
     -yid,--yarnapplicationId    Attach to running YARN session
     -z,--zookeeperNamespace     Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -D              Allows specifying multiple generic
                                     configuration options. The available
                                     options can be found at
                                     https://ci.apache.org/projects/flink/flink-
                                     docs-stable/ops/config.html
     -m,--jobmanager            Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration. Attention: This
                                     option is respected only if the
                                     high-availability configuration is NONE.
     -z,--zookeeperNamespace    Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "savepoint" triggers savepoints for a running job or disposes existing ones.

  Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
  "savepoint" action options:
     -d,--dispose <savepointPath>     Path of savepoint to dispose.
     -j,--jarfile <jarfile>           Flink program JAR file.
  Options for Generic CLI mode:
     -D    Allows specifying multiple generic configuration
                           options. The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor    DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target      The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. For the "run" action the currently available
                           targets are: "remote", "local", "kubernetes-session",
                           "yarn-per-job", "yarn-session". For the
                           "run-application" action the currently available
                           targets are: "kubernetes-application",
                           "yarn-application".

  Options for yarn-cluster mode:
     -m,--jobmanager             Set to yarn-cluster to use YARN execution
                                      mode.
     -yid,--yarnapplicationId    Attach to running YARN session
     -z,--zookeeperNamespace     Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -D              Allows specifying multiple generic
                                     configuration options. The available
                                     options can be found at
                                     https://ci.apache.org/projects/flink/flink-
                                     docs-stable/ops/config.html
     -m,--jobmanager            Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration. Attention: This
                                     option is respected only if the
                                     high-availability configuration is NONE.
     -z,--zookeeperNamespace    Namespace to create the Zookeeper sub-paths
                                     for high availability mode
附录一、maven配置


    4.0.0

    com.gtja
    Flink_demo
    1.0.0

    
    
        
            aliyun
            http://maven.aliyun.com/nexus/content/groups/public/
        
        
            apache
            https://repository.apache.org/content/repositories/snapshots/
        
        
            cloudera
            https://repository.cloudera.com/artifactory/cloudera-repos/
        

        
            spring-plugin
            https://repo.spring.io/plugins-release/
        
    


    
        8
        8

        UTF-8
        UTF-8
        1.8
        1.8
        1.8
        2.12
        1.12.0

    


    
        
            org.apache.flink
            flink-clients_2.12
            ${flink.version}
        
        
            org.apache.flink
            flink-scala_2.12
            ${flink.version}
        
        
            org.apache.flink
            flink-java
            ${flink.version}
        
        
            org.apache.flink
            flink-streaming-scala_2.12
            ${flink.version}
        
        
            org.apache.flink
            flink-streaming-java_2.12
            ${flink.version}
        
        
            org.apache.flink
            flink-table-api-scala-bridge_2.12
            ${flink.version}
        
        
            org.apache.flink
            flink-table-api-java-bridge_2.12
            ${flink.version}
        

        
        
            org.apache.flink
            flink-table-planner-blink_2.12
            ${flink.version}
        
        
            org.apache.flink
            flink-table-common
            ${flink.version}
        

        

        
        
            org.apache.flink
            flink-connector-kafka_2.11
            ${flink.version}
        
        
            org.apache.flink
            flink-sql-connector-kafka_2.11
            ${flink.version}
        
        
            org.apache.flink
            flink-connector-jdbc_2.12
            ${flink.version}
        
        
            org.apache.flink
            flink-csv
            ${flink.version}
        
        
            org.apache.flink
            flink-json
            ${flink.version}
        

        
        
        
        


        
            org.pentaho
            pentaho-aggdesigner-algorithm
            5.1.5-jhyde
            test
        

        
            org.apache.bahir
            flink-connector-redis_2.11
            1.0
            
                
                    flink-streaming-java_2.11
                    org.apache.flink
                
                
                    flink-runtime_2.11
                    org.apache.flink
                
                
                    flink-core
                    org.apache.flink
                
                
                    flink-java
                    org.apache.flink
                
            
        

        
            org.apache.flink
            flink-connector-hive_2.12
            ${flink.version}
        
        
            org.apache.hive
            hive-metastore
            2.1.0
        
        
            org.apache.hive
            hive-exec
            2.1.0
        

        
            org.apache.flink
            flink-shaded-hadoop-2-uber
            2.7.5-10.0
        

        
            org.apache.hbase
            hbase-client
            2.1.0
        

        
            mysql
            mysql-connector-java

            8.0.13
        

        
        
            io.vertx
            vertx-core
            3.9.0
        
        
            io.vertx
            vertx-jdbc-client
            3.9.0
        
        
            io.vertx
            vertx-redis-client
            3.9.0
        

        
        
            org.slf4j
            slf4j-log4j12
            1.7.7
            runtime
        
        
            log4j
            log4j
            1.2.17
            runtime
        

        
            com.alibaba
            fastjson
            1.2.44
        

        
            org.projectlombok
            lombok
            1.18.2
            provided
        

        
        
        
        

    

    
        src/main/java
        
            
            
                org.apache.maven.plugins
                maven-compiler-plugin
                3.8.1
                
                    1.8
                    1.8
                    
                
            
            
                org.apache.maven.plugins
                maven-surefire-plugin
                2.22.2
                
                    false
                    true
                    
                        ***Suite.*
                    
                
            
            
            
                org.apache.maven.plugins
                maven-shade-plugin
                2.3
                
                    
                        package
                        
                            shade
                        
                        
                            
                                
                                    *:*
                                    
                                        



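The preface states that the team builds with Gradle and Scala, so the core of the pom above can be translated to a `build.gradle` roughly as follows. This is a minimal sketch under assumptions, not a complete translation: only the main Flink dependencies are shown, and the Shadow plugin is used as the fat-jar analogue of maven-shade.

```groovy
plugins {
    id 'scala'
    // Shadow plugin as the Gradle equivalent of maven-shade (assumption)
    id 'com.github.johnrengelman.shadow' version '6.1.0'
}

group = 'com.gtja'
version = '1.0.0'

repositories {
    maven { url 'http://maven.aliyun.com/nexus/content/groups/public/' }
    mavenCentral()
}

ext {
    scalaBinary = '2.12'
    flinkVersion = '1.12.0'
}

dependencies {
    implementation 'org.scala-lang:scala-library:2.12.12'
    implementation "org.apache.flink:flink-scala_${scalaBinary}:${flinkVersion}"
    implementation "org.apache.flink:flink-streaming-scala_${scalaBinary}:${flinkVersion}"
    implementation "org.apache.flink:flink-clients_${scalaBinary}:${flinkVersion}"
    implementation "org.apache.flink:flink-table-api-scala-bridge_${scalaBinary}:${flinkVersion}"
    implementation "org.apache.flink:flink-table-planner-blink_${scalaBinary}:${flinkVersion}"
}
```

`gradle shadowJar` then produces the fat jar under `build/libs/`, which can be submitted with `flink run` like the Maven-built jar.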