Rotating Tornado Logs Daily
Once site traffic grows, storing logs per day (or even per hour) makes them much easier to browse and manage, and Python's logging module provides TimedRotatingFileHandler to archive logs at different time granularities.
However, after setting things up according to the official Logging HOWTO, only the root logger's output made it into the new log file; none of Tornado's internal loggers took effect.
Based on an answer on Stack Overflow, I found that the following configuration lets Tornado's internal loggers use TimedRotatingFileHandler as well:
# logging.yaml
version: 1
disable_existing_loggers: false
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
loggers:
  all:
    handlers: [all]
    propagate: false
  tornado:
    handlers: [all]
    propagate: false
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  all:
    class: logging.handlers.TimedRotatingFileHandler
    level: INFO
    formatter: simple
    when: midnight
    filename: ./logs/server.log
root:
  level: INFO
  handlers: [console, all]
  propagate: true
Then, in Tornado's entry-point code, load the configuration (use yaml.safe_load; a bare yaml.load without a Loader argument is unsafe and deprecated in newer PyYAML):

import logging.config
import yaml

with open('logging.yaml', 'r') as f:
    logging.config.dictConfig(yaml.safe_load(f))
- To rotate logs on a different schedule, just change the value of the `when` parameter.
- Note the difference: when `when` is `D`, rotation happens every 24 hours counted from the moment the server started; if, like me, you want the logs archived at midnight each day, use `midnight` instead.
- On CentOS you may need to install python-yaml first: `sudo yum install python-yaml`
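For reference, the same wiring can be done in pure Python without the YAML file. This is a minimal sketch, not the post's method: the backupCount value and the temporary log path are my own choices, while tornado.access, tornado.application, and tornado.general are Tornado's actual internal logger names.

```python
import logging
import logging.handlers
import os
import tempfile

# Write to a temporary directory for this sketch; the post uses ./logs/server.log.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "server.log")

formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

# The handler the YAML declares: rotate at midnight (backupCount is my addition).
handler = logging.handlers.TimedRotatingFileHandler(
    log_path, when="midnight", backupCount=7
)
handler.setLevel(logging.INFO)
handler.setFormatter(formatter)

# Attach it to Tornado's internal loggers so their records reach the file,
# mirroring the YAML's tornado logger entry with propagate disabled.
for name in ("tornado.access", "tornado.application", "tornado.general"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    logger.propagate = False

logging.getLogger("tornado.general").info("logging configured")
handler.flush()
```

The explicit setLevel on each logger matters: without it the loggers inherit root's default WARNING level and INFO records are dropped before the handler ever sees them.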
MySQL: Adding an Nth Slave
If the master server already holds application data, handle the initial sync for replication as follows.
Grant replication privileges to the slave server 192.168.10.131:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rep1'@'192.168.10.131' IDENTIFIED BY 'password';
(1) Lock the master's tables so no further writes can happen:
mysql> FLUSH TABLES WITH READ LOCK;
(2) Check the master's status:
mysql> show master status;
(3) Record the File and Position values, then copy the master's data files (the whole /opt/mysql/data directory) to the slave; archiving with tar before transfer and extracting on the slave is recommended.
Alternatively, export an SQL dump with mysqldump.
(4) Release the lock on the master:
mysql> UNLOCK TABLES;
A project runs MySQL with one master and one slave, and now a second slave must be added without interrupting the master. On a non-production setup we could simply run FLUSH TABLES WITH READ LOCK on the master (all tables become read-only), copy the master's data to the new slave, and keep everything consistent. Since the master must stay online and writable here, the approach changes: new slave 2 copies its data from old slave 1.
On old slave 1:
#1 Stop replication and lock the tables, then note the values of Read_Master_Log_Pos and Master_Log_File (the original's trailing "ERROR: No query specified" came from typing \G; with a semicolon; \G alone is enough):

mysql> stop slave;
mysql> flush tables with read lock;
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.6.53
                  Master_User: dongnan
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000097
          Read_Master_Log_Pos: 19157395
               Relay_Log_File: zabbix-slave-relay-bin.000185
                Relay_Log_Pos: 11573578
        Relay_Master_Log_File: mysql-bin.000097
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 19157395
              Relay_Log_Space: 19142103
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)
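When scripting this procedure, the two replication coordinates can be pulled out of the `SHOW SLAVE STATUS\G` output programmatically. A small sketch; the helper function is my own, not part of the original post:

```python
import re

def parse_slave_coords(status_text):
    """Extract Master_Log_File and Read_Master_Log_Pos from \\G-style output."""
    coords = {}
    for key in ("Master_Log_File", "Read_Master_Log_Pos"):
        # Anchor at line start so e.g. Relay_Master_Log_File does not match.
        m = re.search(r"^\s*%s:\s*(\S+)" % key, status_text, re.MULTILINE)
        if m:
            coords[key] = m.group(1)
    return coords

sample = """
              Master_Log_File: mysql-bin.000097
          Read_Master_Log_Pos: 19157395
        Relay_Master_Log_File: mysql-bin.000097
"""
print(parse_slave_coords(sample))
# {'Master_Log_File': 'mysql-bin.000097', 'Read_Master_Log_Pos': '19157395'}
```

These are exactly the two values fed to CHANGE MASTER TO in step #3 on the new slave.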
#2 Pack the data and send it to the new slave 2 (the original scp'd a differently named archive; the filenames are aligned here):

mysqldump -uroot -p --databases a b c d e f > data.sql
tar czvf data.tar.gz data.sql
scp data.tar.gz root@192.168.6.54:/root
On new slave 2:
#1 Change server-id; it must not be 1, because the master already uses server-id=1 (every server in the topology needs a unique id):
- vim /etc/my.cnf
- server-id = 3
#2 Import the data; the --init-command="SET SQL_LOG_BIN = 0;" option avoids generating a huge binlog during the initial import:
mysql --init-command="SET SQL_LOG_BIN = 0;" -u root -p < data.sql
#3 Start MySQL and point replication at the master with the coordinates recorded in step #1 on old slave 1:
Exec_Master_Log_Pos: 19157395
Master_Log_File: mysql-bin.000097

mysql> change master to master_host='192.168.6.53',master_user='dongnan',master_password='password',master_log_file='mysql-bin.000097',master_log_pos=19157395;
mysql> start slave;            # start the slave
mysql> show slave status\G     # check slave status
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.6.53
                  Master_User: dongnan
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000097
          Read_Master_Log_Pos: 21194359
               Relay_Log_File: db1-relay-bin.000002
                Relay_Log_Pos: 2037215
        Relay_Master_Log_File: mysql-bin.000097
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 21194359
              Relay_Log_Space: 2037368
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)
Verify that both slaves are replicating.
Old slave 1:

mysql> unlock tables;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.6.53
                  Master_User: dongnan
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000097
          Read_Master_Log_Pos: 21194359
               Relay_Log_File: db1-relay-bin.000002
                Relay_Log_Pos: 2037215
        Relay_Master_Log_File: mysql-bin.000097
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

New slave 2:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.6.53
                  Master_User: dongnan
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000097
          Read_Master_Log_Pos: 21194359
               Relay_Log_File: db1-relay-bin.000002
                Relay_Log_Pos: 2037215
        Relay_Master_Log_File: mysql-bin.000097
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
If a replication error appears, try stopping the slave, setting the parameter below, then starting it again:
mysql> stop slave;
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
mysql> start slave;
Conclusion
Since the master could not be touched, the work moved to the slaves: new slave 2 copies its data from old slave 1.
Extras:
Create the replication account on the master:
mysql> grant replication slave on *.* to 'repl'@'%' identified by 'repl';

Master configuration:
[mysqld]
log_slave_updates=1   # enables chained replication; for a chain like A->B->C this must be set on B
log_bin = /home/logs/mysql/mysql-bin
expire_logs_days = 7
max_binlog_size = 100M
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
binlog_format=ROW

# Slave configuration
replicate-ignore-db=mysql
replicate-ignore-db=information_schema
slave-skip-errors=1064 1146 1062 1032

One-liner to clone all databases straight onto a new host over SSH:
mysqldump -u root -pPassword --all-databases | ssh user@new_host.host.com 'cat - | mysql -u root -pPassword'