
Category: Big Data

2018-07-06 16:27:30


Most TiDB installation guides online use Ansible, which is the officially recommended method. TiDB can also be installed without Ansible, though, and the steps below show how.
Environment:
OS: CentOS 7
Topology:

Role          IP
TiDB server   192.168.1.118
PD server     192.168.1.118
TiKV1         192.168.1.101
TiKV2         192.168.1.129
TiKV3         192.168.1.85


1. Prepare the installation media
Download from the official site:

wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz

wget http://download.pingcap.org/tidb-latest-linux-amd64.sha256


# Verify file integrity; the file is intact if the check prints OK

[root@host01 tidb]# sha256sum -c tidb-latest-linux-amd64.sha256

tidb-latest-linux-amd64.tar.gz: OK
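If you want to script the download and verification in one step, a minimal sketch looks like this (the download host is the official PingCAP mirror as documented at the time; confirm it is still current):

# download the tarball plus its checksum file, then verify before installing
wget -q http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
wget -q http://download.pingcap.org/tidb-latest-linux-amd64.sha256
sha256sum -c tidb-latest-linux-amd64.sha256 || { echo "checksum mismatch"; exit 1; }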


2. Extract and install
Do this on every machine.

Extract the package to your chosen directory; here I use /db/:

[root@host01 tidb]# tar -xvf tidb-latest-linux-amd64.tar.gz

[root@host01 tidb]# mv tidb-latest-linux-amd64 /db/tidb
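Since every host needs the same binaries, the package can be pushed out and extracted everywhere in one loop. A sketch, assuming passwordless SSH as root and the host list from the topology above:

# copy the tarball to each node and unpack it into /db/tidb
for host in 192.168.1.118 192.168.1.101 192.168.1.129 192.168.1.85; do
  scp tidb-latest-linux-amd64.tar.gz root@$host:/tmp/
  ssh root@$host 'tar -xf /tmp/tidb-latest-linux-amd64.tar.gz -C /tmp \
      && mkdir -p /db && mv /tmp/tidb-latest-linux-amd64 /db/tidb'
done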


3. Start PD
The TiDB components must be started in the order PD -> TiKV -> TiDB, so we start them in that order.
Start PD first:
./pd-server --name=pd1 --data-dir=/db/tidb/pd1 --client-urls="http://192.168.1.118:2379" --log-file=/db/tidb/pd.log &

Starting with a configuration file:
./pd-server --config=/db/tidb/conf/pd.cnf
The configuration file contents are as follows:

[root@localhost bin]# more /db/tidb/conf/pd.cnf

name = "pd1"

data-dir = "/db/tidb/pd1"

client-urls = "http://192.168.1.118:2379"

log-file = "/db/tidb/log/pd1.log"

It seems every parameter value must be wrapped in double quotes for this to work.
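Before moving on to TiKV, it is worth confirming that PD is actually serving. PD exposes an HTTP API on its client URL, so a quick check is:

# a JSON member list means PD is up and reachable on the client URL
curl http://192.168.1.118:2379/pd/api/v1/members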


4. Start TiKV

Start the three TiKV instances, one on each TiKV node.

tikv1: 192.168.1.101

./tikv-server --pd="192.168.1.118:2379" --addr="192.168.1.101:20160" --data-dir=/db/tidb/data/tikv --log-file=/db/tidb/log/tikv.log &

 

tikv2:192.168.1.129

./tikv-server --pd="192.168.1.118:2379" --addr="192.168.1.129:20160" --data-dir=/db/tidb/data/tikv --log-file=/db/tidb/log/tikv.log &

 

tikv3: 192.168.1.85

./tikv-server --pd="192.168.1.118:2379" --addr="192.168.1.85:20160" --data-dir=/db/tidb/data/tikv --log-file=/db/tidb/log/tikv.log &
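Instead of logging in to each TiKV node by hand, the three starts can be driven from one machine. A sketch, assuming passwordless SSH and the directory layout used above:

# start tikv-server on every node, each binding to its own IP
for host in 192.168.1.101 192.168.1.129 192.168.1.85; do
  ssh root@$host "cd /db/tidb/bin && mkdir -p /db/tidb/log && \
      nohup ./tikv-server --pd=192.168.1.118:2379 --addr=$host:20160 \
      --data-dir=/db/tidb/data/tikv --log-file=/db/tidb/log/tikv.log \
      >/dev/null 2>&1 &"
done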

 

Starting with a configuration file:

./tikv-server --config=/db/tidb/conf/tikv.cnf

The configuration file contents:

[root@localhost conf]# more tikv.cnf

addr = "192.168.1.101:20160"

data-dir = "/db/tidb/data/tikv"

advertise-addr = ""

log-file = "/db/tidb/log/tikv.log"

 

[pd]

endpoints = ["192.168.1.118:2379"]

[root@localhost conf]#

 

Each TiKV node has a different IP, so on each node change the addr parameter to point to that node's own IP.
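One way to keep the three files consistent is to stamp each node's tikv.cnf out of a shared template. A sketch; the __IP__ placeholder and the template path are my own convention, not part of TiKV:

# tikv.cnf.template is the file above, but with: addr = "__IP__:20160"
for host in 192.168.1.101 192.168.1.129 192.168.1.85; do
  sed "s/__IP__/$host/" /db/tidb/conf/tikv.cnf.template > /tmp/tikv.cnf
  scp /tmp/tikv.cnf root@$host:/db/tidb/conf/tikv.cnf
done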

After running the command above, a last_tikv.toml file is generated automatically in the bin directory. You can copy that file to my.toml, edit the node-specific parameters (chiefly addr, data-dir, log-file, and the [pd] endpoints), and then use it as the startup configuration file:

./tikv-server --config=/db/tidb/my.toml &

[root@localhost bin]# more last_tikv.toml

log-level = "info"

log-file = "/db/tidb/log/tikv.log"

log-rotation-timespan = "24h"

[readpool.storage]

high-concurrency = 4

normal-concurrency = 4

low-concurrency = 4

max-tasks-per-worker-high = 2000

max-tasks-per-worker-normal = 2000

max-tasks-per-worker-low = 2000

stack-size = "10MB"

 

[readpool.coprocessor]

high-concurrency = 8

normal-concurrency = 8

low-concurrency = 8

max-tasks-per-worker-high = 2000

max-tasks-per-worker-normal = 2000

max-tasks-per-worker-low = 2000

stack-size = "10MB"

 

[server]

addr = "192.168.1.85:20160"

advertise-addr = ""

grpc-compression-type = "none"

grpc-concurrency = 4

grpc-concurrent-stream = 1024

grpc-raft-conn-num = 10

grpc-stream-initial-window-size = "2MB"

grpc-keepalive-time = "10s"

grpc-keepalive-timeout = "3s"

concurrent-send-snap-limit = 32

concurrent-recv-snap-limit = 32

end-point-recursion-limit = 1000

end-point-stream-channel-size = 8

end-point-batch-row-limit = 64

end-point-stream-batch-row-limit = 128

end-point-request-max-handle-duration = "1m"

snap-max-write-bytes-per-sec = "100MB"

snap-max-total-size = "0KB"

 

[server.labels]

 

[storage]

data-dir = " /db/tidb/data/tikv"

gc-ratio-threshold = 1.1

max-key-size = 4096

scheduler-notify-capacity = 10240

scheduler-concurrency = 102400

scheduler-worker-pool-size = 4

scheduler-pending-write-threshold = "100MB"

 

[pd]

endpoints = ["192.168.1.118:2379"]

 

[metric]

interval = "15s"

address = ""

job = "tikv"

 

[raftstore]

sync-log = true

prevote = false

raftdb-path = ""

capacity = "0KB"

raft-base-tick-interval = "1s"

raft-heartbeat-ticks = 2

raft-election-timeout-ticks = 10

raft-min-election-timeout-ticks = 0

raft-max-election-timeout-ticks = 0

raft-max-size-per-msg = "1MB"

raft-max-inflight-msgs = 256

raft-entry-max-size = "8MB"

raft-log-gc-tick-interval = "10s"

raft-log-gc-threshold = 50

raft-log-gc-count-limit = 73728

raft-log-gc-size-limit = "72MB"

split-region-check-tick-interval = "10s"

region-split-check-diff = "6MB"

region-compact-check-interval = "5m"

clean-stale-peer-delay = "10m"

region-compact-check-step = 100

region-compact-min-tombstones = 10000

region-compact-tombstones-percent = 30

pd-heartbeat-tick-interval = "1m"

pd-store-heartbeat-tick-interval = "10s"

snap-mgr-gc-tick-interval = "1m"

snap-gc-timeout = "4h"

lock-cf-compact-interval = "10m"

lock-cf-compact-bytes-threshold = "256MB"

notify-capacity = 40960

messages-per-tick = 4096

max-peer-down-duration = "5m"

max-leader-missing-duration = "2h"

abnormal-leader-missing-duration = "10m"

peer-stale-state-check-interval = "5m"

snap-apply-batch-size = "10MB"

consistency-check-interval = "0s"

report-region-flow-interval = "1m"

raft-store-max-leader-lease = "9s"

right-derive-when-split = true

allow-remove-leader = false

merge-max-log-gap = 10

merge-check-tick-interval = "10s"

use-delete-range = false

cleanup-import-sst-interval = "10m"

 

[coprocessor]

split-region-on-table = true

region-max-size = "144MB"

region-split-size = "96MB"

 

[rocksdb]

wal-recovery-mode = 2

wal-dir = ""

wal-ttl-seconds = 0

wal-size-limit = "0KB"

max-total-wal-size = "4GB"

max-background-jobs = 6

max-manifest-file-size = "20MB"

create-if-missing = true

max-open-files = 40960

enable-statistics = true

stats-dump-period = "10m"

compaction-readahead-size = "0KB"

info-log-max-size = "1GB"

info-log-roll-time = "0s"

info-log-keep-log-file-num = 10

info-log-dir = ""

rate-bytes-per-sec = "0KB"

bytes-per-sync = "1MB"

wal-bytes-per-sync = "512KB"

max-sub-compactions = 1

writable-file-max-buffer-size = "1MB"

use-direct-io-for-flush-and-compaction = false

enable-pipelined-write = true

 

[rocksdb.defaultcf]

block-size = "64KB"

block-cache-size = "1955MB"

disable-block-cache = false

cache-index-and-filter-blocks = true

pin-l0-filter-and-index-blocks = true

use-bloom-filter = true

whole-key-filtering = true

bloom-filter-bits-per-key = 10

block-based-bloom-filter = false

read-amp-bytes-per-bit = 0

compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]

write-buffer-size = "128MB"

max-write-buffer-number = 5

min-write-buffer-number-to-merge = 1

max-bytes-for-level-base = "512MB"

target-file-size-base = "8MB"

level0-file-num-compaction-trigger = 4

level0-slowdown-writes-trigger = 20

level0-stop-writes-trigger = 36

max-compaction-bytes = "2GB"

compaction-pri = 3

dynamic-level-bytes = false

num-levels = 7

max-bytes-for-level-multiplier = 10

compaction-style = 0

disable-auto-compactions = false

soft-pending-compaction-bytes-limit = "64GB"

hard-pending-compaction-bytes-limit = "256GB"

 

[rocksdb.writecf]

block-size = "64KB"

block-cache-size = "1173MB"

disable-block-cache = false

cache-index-and-filter-blocks = true

pin-l0-filter-and-index-blocks = true

use-bloom-filter = true

whole-key-filtering = false

bloom-filter-bits-per-key = 10

block-based-bloom-filter = false

read-amp-bytes-per-bit = 0

compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]

write-buffer-size = "128MB"

max-write-buffer-number = 5

min-write-buffer-number-to-merge = 1

max-bytes-for-level-base = "512MB"

target-file-size-base = "8MB"

level0-file-num-compaction-trigger = 4

level0-slowdown-writes-trigger = 20

level0-stop-writes-trigger = 36

max-compaction-bytes = "2GB"

compaction-pri = 3

dynamic-level-bytes = false

num-levels = 7

max-bytes-for-level-multiplier = 10

compaction-style = 0

disable-auto-compactions = false

soft-pending-compaction-bytes-limit = "64GB"

hard-pending-compaction-bytes-limit = "256GB"

 

[rocksdb.lockcf]

block-size = "16KB"

block-cache-size = "256MB"

disable-block-cache = false

cache-index-and-filter-blocks = true

pin-l0-filter-and-index-blocks = true

use-bloom-filter = true

whole-key-filtering = true

bloom-filter-bits-per-key = 10

block-based-bloom-filter = false

read-amp-bytes-per-bit = 0

compression-per-level = ["no", "no", "no", "no", "no", "no", "no"]

write-buffer-size = "128MB"

max-write-buffer-number = 5

min-write-buffer-number-to-merge = 1

max-bytes-for-level-base = "128MB"

target-file-size-base = "8MB"

level0-file-num-compaction-trigger = 1

level0-slowdown-writes-trigger = 20

level0-stop-writes-trigger = 36

max-compaction-bytes = "2GB"

compaction-pri = 0

dynamic-level-bytes = false

num-levels = 7

max-bytes-for-level-multiplier = 10

compaction-style = 0

disable-auto-compactions = false

soft-pending-compaction-bytes-limit = "64GB"

hard-pending-compaction-bytes-limit = "256GB"

 

[rocksdb.raftcf]

block-size = "16KB"

block-cache-size = "128MB"

disable-block-cache = false

cache-index-and-filter-blocks = true

pin-l0-filter-and-index-blocks = true

use-bloom-filter = true

whole-key-filtering = true

bloom-filter-bits-per-key = 10

block-based-bloom-filter = false

read-amp-bytes-per-bit = 0

compression-per-level = ["no", "no", "no", "no", "no", "no", "no"]

write-buffer-size = "128MB"

max-write-buffer-number = 5

min-write-buffer-number-to-merge = 1

max-bytes-for-level-base = "128MB"

target-file-size-base = "8MB"

level0-file-num-compaction-trigger = 1

level0-slowdown-writes-trigger = 20

level0-stop-writes-trigger = 36

max-compaction-bytes = "2GB"

compaction-pri = 0

dynamic-level-bytes = false

num-levels = 7

max-bytes-for-level-multiplier = 10

compaction-style = 0

disable-auto-compactions = false

soft-pending-compaction-bytes-limit = "64GB"

hard-pending-compaction-bytes-limit = "256GB"

 

[raftdb]

wal-recovery-mode = 2

wal-dir = ""

wal-ttl-seconds = 0

wal-size-limit = "0KB"

max-total-wal-size = "4GB"

max-manifest-file-size = "20MB"

create-if-missing = true

max-open-files = 40960

enable-statistics = true

stats-dump-period = "10m"

compaction-readahead-size = "0KB"

info-log-max-size = "1GB"

info-log-roll-time = "0s"

info-log-keep-log-file-num = 10

info-log-dir = ""

max-sub-compactions = 1

writable-file-max-buffer-size = "1MB"

use-direct-io-for-flush-and-compaction = false

enable-pipelined-write = true

allow-concurrent-memtable-write = false

bytes-per-sync = "1MB"

wal-bytes-per-sync = "512KB"

 

[raftdb.defaultcf]

block-size = "64KB"

block-cache-size = "256MB"

disable-block-cache = false

cache-index-and-filter-blocks = true

pin-l0-filter-and-index-blocks = true

use-bloom-filter = false

whole-key-filtering = true

bloom-filter-bits-per-key = 10

block-based-bloom-filter = false

read-amp-bytes-per-bit = 0

compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]

write-buffer-size = "128MB"

max-write-buffer-number = 5

min-write-buffer-number-to-merge = 1

max-bytes-for-level-base = "512MB"

target-file-size-base = "8MB"

level0-file-num-compaction-trigger = 4

level0-slowdown-writes-trigger = 20

level0-stop-writes-trigger = 36

max-compaction-bytes = "2GB"

compaction-pri = 0

dynamic-level-bytes = false

num-levels = 7

max-bytes-for-level-multiplier = 10

compaction-style = 0

disable-auto-compactions = false

soft-pending-compaction-bytes-limit = "64GB"

hard-pending-compaction-bytes-limit = "256GB"

 

[security]

ca-path = ""

cert-path = ""

key-path = ""

 

[import]

import-dir = "/tmp/tikv/import"

num-threads = 8

num-import-jobs = 8

num-import-sst-jobs = 2

max-prepare-duration = "5m"

region-split-size = "96MB"

stream-channel-window = 128
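When reusing this generated file as my.toml on another node, only the node-specific lines need to change, so the copy-and-edit takes a couple of commands (a sketch; substitute each node's own IP):

# derive my.toml from the generated file and point addr at this node
cp last_tikv.toml /db/tidb/my.toml
sed -i 's|^addr = .*|addr = "192.168.1.101:20160"|' /db/tidb/my.toml
./tikv-server --config=/db/tidb/my.toml &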


5. Start TiDB
Start it on the node where TiDB runs:

./tidb-server --store=tikv --path="192.168.1.118:2379" --log-file=/db/tidb/log/tidb.log &

 

Starting with a configuration file:

./tidb-server --config=/db/tidb/conf/tidb.cnf

The configuration file contents are as follows:

[root@localhost bin]# more /db/tidb/conf/tidb.cnf

store="tikv"

path="192.168.1.118:2379"

log-file="/db/tidb/log/tidb.log"

Note that every parameter value must be wrapped in double quotes.
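Besides connecting with a MySQL client, tidb-server also answers on its status port (10080 by default), which gives a quick liveness check:

# returns JSON with the server version and current connection count
curl http://192.168.1.118:10080/status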


6. Log in
[root@localhost bin]# mysql -h 192.168.1.118 -P 4000 -u root -D test

2018/07/06 15:55:32.056 server.go:310: [info] [con:1] new connection 192.168.1.118:35522

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MySQL connection id is 1

Server version: 5.7.10-TiDB-v2.1.0-beta-15-g40193a3 MySQL Community Server (Apache License 2.0)

 

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

 

MySQL [test]> show databases;

+--------------------+

| Database           |

+--------------------+

| INFORMATION_SCHEMA |

| PERFORMANCE_SCHEMA |

| mysql              |

| test               |

+--------------------+

4 rows in set (0.00 sec)


7. Create a database

mysql> create database hxl;

 

Create a table:

mysql> create table tb_test(id int,name varchar(10));

Insert data:

mysql> insert into tb_test(id,name) values(1,'name1');

Query OK, 1 row affected (0.01 sec)

 

mysql> insert into tb_test(id,name) values(2,'name2');

Query OK, 1 row affected (0.01 sec)

 

mysql> select * from tb_test;

+------+-------+

| id   | name  |

+------+-------+

|    1 | name1 |

|    2 | name2 |

+------+-------+

2 rows in set (0.00 sec)
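The same smoke test can be run non-interactively, which is convenient to repeat after restarts. A sketch using the same mysql client as above:

# one-shot check: create schema/table if needed, insert, read back
mysql -h 192.168.1.118 -P 4000 -u root -e "
create database if not exists hxl;
create table if not exists hxl.tb_test(id int, name varchar(10));
insert into hxl.tb_test(id,name) values(3,'name3');
select * from hxl.tb_test;"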


8. Problems encountered

Problem 1: duplicated store address

Jul 06 10:49:20.133 ERRO fail to request: Grpc(RpcFailure(RpcStatus { status: Unknown, details: Some("duplicated store address: id:5 address:\"127.0.0.1:20160\" , already registered by id:1 address:\"127.0.0.1:20160\" ") }))

 

The cause is probably that TiKV was started more than once, leaving a stale store registration. You can connect to PD to inspect the stores and delete the duplicate:

./pd-ctl -u http://192.168.1.118:2379

>store

>store delete 5

Oddly, although the delete reported success, running store again still showed that id; I will follow up on this later.
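For cross-checking what pd-ctl shows, the store list is also available over PD's HTTP API. Note that store delete only marks a store Offline; it keeps appearing in the list (eventually as Tombstone) until its regions have been migrated away, which likely explains why the id was still visible:

# deleted stores remain listed with state Offline/Tombstone for a while
curl http://192.168.1.118:2379/pd/api/v1/stores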


