Code:
# hdfs audit logging
#
hdfs.audit.logger=INFO
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=true
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
After changing to this configuration, I do see audit log entries, but they are being written into the NameNode's own log (see the excerpt below)...
Code:
2016-01-26 15:01:43,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: allowed=true ugi=root (auth:SIMPLE) ip=/172.16.0.5 cmd=listStatus src=/tmp/hbase1.0.0/oldWALs dst=null perm=null proto=rpc
2016-01-26 15:01:43,773 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: allowed=true ugi=root (auth:SIMPLE) ip=/172.16.0.5 cmd=listStatus src=/tmp/hbase1.0.0/archive dst=null perm=null proto=rpc
2016-01-26 15:01:47,116 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2016-01-26 15:01:49,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/***** cmd=listStatus src=/ dst=null perm=null proto=webhdfs
2016-01-26 15:01:50,878 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/***** cmd=listStatus src=/ dst=null perm=null proto=webhdfs
I'll keep tinkering with this on my own...
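The likely cause: in the configuration above, the audit logger org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit is set to plain INFO with no appender of its own, and additivity=true lets its events propagate up to the root logger, which is what writes the NameNode log. A minimal sketch of the usual fix, assuming the stock Hadoop log4j.properties layout (the stock file references ${hdfs.audit.logger} rather than hardcoding INFO): attach the RFAAUDIT appender to the audit logger and turn additivity off.

Code:
# hdfs audit logging -- send audit events to their own rolling file
# Attaching RFAAUDIT via this property keeps it overridable with -Dhdfs.audit.logger
hdfs.audit.logger=INFO,RFAAUDIT
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
# additivity=false stops audit events from also reaching the root logger,
# i.e. from showing up in the NameNode log
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}

With the stock log4j.properties, the same effect can usually be achieved without editing the file at all, by adding -Dhdfs.audit.logger=INFO,RFAAUDIT to HADOOP_NAMENODE_OPTS in hadoop-env.sh and restarting the NameNode.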