I've tried the official xtrabackup several times and could never get it working, so I fell back on a more primitive approach...
The backup runs on EC2 and writes straight to S3; instead of using s3cmd, I mount the bucket directly with s3fs.
That said, the known memory-hogging bug in s3fs still appears to have no fix...
So the only workaround seems to be mounting the bucket right before the scheduled job runs and unmounting it once the backup is done, as in the sketch below.
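A minimal sketch of that mount-before/unmount-after pattern, assuming a bucket named my-backup-bucket, credentials in /home/ec2-user/.passwd-s3fs, and the dump script below saved as /home/ec2-user/backup_db.sh (all of these names are my placeholders, not from the original setup); the wrapper itself could be saved as, say, /home/ec2-user/backup_wrapper.sh:

#!/bin/bash
# Mount the S3 bucket only for the duration of the backup,
# to limit how long the memory-hungry s3fs process stays up.
BUCKET=my-backup-bucket                      # placeholder bucket name
MOUNT_POINT=/home/ec2-user/db2s3

# Mount via s3fs (credentials file path is an assumption)
s3fs "$BUCKET" "$MOUNT_POINT" -o passwd_file=/home/ec2-user/.passwd-s3fs

# Run the mysqldump backup script (shown below)
/home/ec2-user/backup_db.sh

# Unmount once the dump and gzip have finished
fusermount -u "$MOUNT_POINT"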
Save the following as a .sh file and it's basically good to go:

#!/bin/bash
NOW=$(date +"%Y-%m-%d")
echo "Backup Start..."
mysqldump --databases [database] \
-h [Database Endpoint] \
-u [USER] \
-P 3306 -p[password, no space after -p] > /home/ec2-user/db2s3/[path][filename]_$NOW.sql
echo "Backup Finish"
echo "Data Zip Start..."
gzip /home/ec2-user/db2s3/[路徑][檔名]_$NOW.sql
echo "Zip Finish"
A few SQL commands for calculating table sizes:

SELECT concat(table_schema,'.',table_name),
concat(round(table_rows/1000000,2),'M') rows,
concat(round(data_length/(1024*1024*1024),2),'G') DATA,
concat(round(index_length/(1024*1024*1024),2),'G') idx,
concat(round((data_length+index_length)/(1024*1024*1024),2),'G') total_size,
round(index_length/data_length,2) idxfrac
FROM information_schema.TABLES
ORDER BY data_length+index_length DESC LIMIT 20;
SELECT table_schema AS "Database name",
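For per-database totals rather than per-table ones, a query along these lines works (the SUM/ROUND details, the MB unit, and everything beyond the first column alias are my assumptions, not necessarily the exact query used originally):

SELECT table_schema AS "Database name",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
FROM information_schema.TABLES
GROUP BY table_schema
ORDER BY SUM(data_length + index_length) DESC;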