
canal & maxwell

1. maxwell

1.1 Download the package on Linux

Download and unpack on Linux:
curl -sLo - https://github.com/zendesk/maxwell/releases/download/v1.25.0/maxwell-1.25.0.tar.gz \
  | tar zxvf -
cd maxwell-1.25.0
1.2 MySQL configuration
1. On the MySQL instance that has binlog enabled and whose data will be synchronized, create a maxwell database to store Maxwell's metadata:
CREATE DATABASE maxwell;
2. Create the MySQL account and password for Maxwell and grant it the required privileges:
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'maxwell'@'%' IDENTIFIED BY '123456kssl';
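On MySQL 8.0 and later, IDENTIFIED BY is no longer accepted inside GRANT, so the user must be created first. A minimal sketch assuming the same account and password as above (Maxwell's quickstart also grants the account full access to its own maxwell metadata database):
# Sketch for MySQL 8.0+: create the user, then grant the replication and metadata privileges.
mysql -uroot -p <<'SQL'
CREATE USER 'maxwell'@'%' IDENTIFIED BY '123456kssl';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'maxwell'@'%';
GRANT ALL PRIVILEGES ON maxwell.* TO 'maxwell'@'%';
FLUSH PRIVILEGES;
SQL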
1.3 Maxwell configuration file
log_level=info

producer=kafka
kafka.bootstrap.servers= 119.103.106.63:9092,114.119.127.82:9092,119.18.164.41:9092
# must be added
kafka_topic=OPGDIdhisOrders
# mysql login info
host=192.168.122.78
user=maxwell
password=123456kssl
# must be added; used later for bootstrap initialization
client_id=maxwell_1
producer_partition_by=table
# Kerberos authentication
kafka.security.protocol= SASL_PLAINTEXT
kafka.sasl.mechanism= GSSAPI
kafka.sasl.kerberos.service.name= kafka
kafka.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule  required serviceName=kafka useKeyTab=true storeKey=true keyTab="/opt/module/maxwell-1.25.0/kafka.keytab" principal="kafka@HADOOP.COM";

# binlog data filtering
exclude_dbs=*
include_dbs=*
include_tables=customer,customer_order,basic_information,extra_info,customer_bank,bank_information,contact_information,customer_bill_pay_log,order_zhanqi
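The Kerberos settings above rely on the keytab named in kafka.sasl.jaas.config. A minimal pre-flight check, assuming kinit/klist are installed and using the keytab path and principal from the config:
# Optional check: confirm the keytab referenced in the config actually yields a ticket.
# (Maxwell's Kafka client logs in from the keytab by itself via the JAAS config.)
kinit -kt /opt/module/maxwell-1.25.0/kafka.keytab kafka@HADOOP.COM
klist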
1.4 Startup command
/opt/module/maxwell-1.25.0/bin/maxwell --config /opt/module/maxwell-1.25.0/config.properties >/dev/null 2>&1 &
Note: data is filtered according to the filter rules in the configuration file.

The filter can also be supplied on the command line instead (run in the background with nohup):
nohup /opt/module/maxwell-1.25.0/bin/maxwell --config /opt/module/maxwell-1.25.0/config.properties --filter="exclude:*.*, include:*.customer,include:*.customer_order,include:*.basic_information, include:*.extra_info, include:*.customer_bank, include:*.bank_information, include:*.contact_information,include:*.customer_bill_pay_log, include:*.order_zhanqi" >/dev/null 2>&1 &
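To confirm that change events are reaching Kafka, consume a few messages from the configured topic. A minimal sketch, assuming Kafka is installed under /opt/kafka and that client-sasl.properties is a hypothetical client config carrying the same SASL_PLAINTEXT/GSSAPI settings as config.properties:
# client-sasl.properties (hypothetical) would contain:
#   security.protocol=SASL_PLAINTEXT
#   sasl.mechanism=GSSAPI
#   sasl.kerberos.service.name=kafka
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 119.103.106.63:9092 \
  --topic OPGDIdhisOrders \
  --consumer.config client-sasl.properties \
  --max-messages 5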

1.5 Enabling bootstrap
bin/maxwell-bootstrap --user maxwell --password 123456kssl --host Linux1 --database bantuandana --table customer --client_id maxwell_1

--client_id
maxwell-bootstrap cannot write data to Kafka or HBase by itself; --client_id specifies which running Maxwell process handles the bootstrapped rows, and it must match the client_id set in Maxwell's config.properties.
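Bootstrap progress can be followed in Maxwell's metadata database. A minimal sketch, assuming the job is recorded in the maxwell.bootstrap table (table and column names may differ between Maxwell versions):
# Assumption: Maxwell tracks bootstrap jobs in maxwell.bootstrap; check whether the job finished.
mysql -umaxwell -p123456kssl -h 192.168.122.78 -e \
  "SELECT database_name, table_name, is_complete, inserted_rows FROM maxwell.bootstrap ORDER BY id DESC LIMIT 5;"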

When starting Maxwell with your own configuration, the filter can be passed on the command line:
/opt/module/maxwell-1.25.0/bin/maxwell --config /opt/module/maxwell-1.25.0/config.properties --filter="exclude:*.*, include:*.customer" >/dev/null 2>&1 &

If no filter is given on the command line, filter in the configuration file instead:
exclude_dbs=*
include_dbs=*
include_tables=customer,customer_order,basic_information,extra_info,customer_bank,bank_information,contact_information,customer_bill_pay_log,order_zhanqi
1.6 Issues

How to fix Maxwell repeatedly re-running a bootstrap

Reference:
https://zhuanlan.zhihu.com/p/142427247

2. canal

2.1 canal.properties configuration file

This file holds the settings for connecting to the Kafka cluster.

#################################################
######### 		common argument		#############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441

#canal.zkServers =linux1:2181,linux2:2181,linux3:2181
canal.zkServers =linux1:2181,linux2:2181,linux3:2181/kafka;principal=zookeeper@HADOOP.COM
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, RocketMQ
canal.serverMode = kafka
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024 
## meory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecing config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size =  1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = true
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false

# binlog format/image check
canal.instance.binlog.format = ROW 
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =

#################################################
######### 		destinations		#############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 6

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
######### 		     MQ 		     #############
##################################################
#canal.mq.servers = 119.8.191.249:9092,119.8.171.47:9092,114.119.184.172:9092
canal.mq.servers = linux1:9092,linux2:9092,linux3:9092
canal.mq.retries = 5
#canal.mq.batchSize = 16384
canal.mq.batchSize = 5242880
#52428800 10485760
canal.mq.maxRequestSize = 10485760
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
# aliyun mq namespace
#canal.mq.namespace =

##################################################
#########     Kafka Kerberos Info    #############
##################################################
canal.mq.kafka.kerberos.enable = true
canal.mq.kafka.kerberos.krb5FilePath = /etc/krb5.conf
canal.mq.kafka.kerberos.jaasFilePath = /etc/kafka/kafka_client_jaas.conf
kafka.security.protocol = SASL_PLAINTEXT
kafka.sasl.mechanism = GSSAPI
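canal.mq.kafka.kerberos.jaasFilePath must point at a JAAS file that exists on the canal host. A minimal sketch of /etc/kafka/kafka_client_jaas.conf; the keytab path and principal below are assumptions, substitute your own:
# Sketch of the JAAS file referenced above; keytab and principal are placeholders.
cat > /etc/kafka/kafka_client_jaas.conf <<'EOF'
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  serviceName="kafka"
  keyTab="/etc/kafka/kafka.keytab"
  principal="kafka@HADOOP.COM";
};
EOF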
2.2 Instance configuration (example/instance.properties)

Each example (instance) is configured for one source database address; to synchronize MySQL data from multiple hosts, create multiple instance directories. With the canal build downloaded from the official site, canal does not need to be restarted after an instance file is modified; the canal service picks up the change automatically.
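Adding a second instance is just a copy of the example directory plus an entry in canal.destinations. A minimal sketch, assuming canal is installed under /opt/module/canal and "db2" is a hypothetical instance name (the full instance.properties used for example follows below):
# Hypothetical second instance "db2": copy the template and register it in canal.properties.
cd /opt/module/canal/conf
cp -r example db2
vi db2/instance.properties      # point canal.instance.master.address at the second MySQL host
# then set in canal.properties:  canal.destinations = example,db2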

#################################################
## mysql serverId , v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# position info
canal.instance.master.address=192.168.322.623:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal#Rj9
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
# table black regex
canal.instance.filter.regex=.*\.customer,.*\.customer_order,.*\.basic_information,.*\.extra_info,.*\.customer_bank,.*\.bank_information,.*\.contact_information,.*\.customer_bill_pay_log,.*\.order_zhanqi
canal.instance.filter.black.regex=
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=OPGDId
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\..*,.*\..*
canal.mq.partition=0
# hash partition config
canal.mq.partitionsNum=10
canal.mq.partitionHash=.*\..*:$pk$
#################################################
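With canal.properties and the instance file in place, canal is started with its bundled scripts. A minimal sketch, assuming an installation under /opt/module/canal:
# Start canal and watch the server log and the per-instance log (paths assume /opt/module/canal).
/opt/module/canal/bin/startup.sh
tail -f /opt/module/canal/logs/canal/canal.log
tail -f /opt/module/canal/logs/example/example.log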