Viewing a Process's Environment Variables on Linux

Linux creates a directory for every process that stores a large amount of information about it. It lives at:

/proc/${pid}

It typically contains files like these:

total 0
-rw-r--r-- 1 root root 0 Dec 31 00:00 autogroup
-r-------- 1 root root 0 Dec 31 00:00 auxv
-r--r--r-- 1 root root 0 Dec 31 00:00 cgroup
--w------- 1 root root 0 Dec 31 00:00 clear_refs
-r--r--r-- 1 root root 0 Dec 31 00:00 cmdline
-rw-r--r-- 1 root root 0 Dec 31 00:00 comm
-rw-r--r-- 1 root root 0 Dec 31 00:00 coredump_filter
-r--r--r-- 1 root root 0 Dec 31 00:00 cpuset
lrwxrwxrwx 1 root root 0 Dec 31 00:00 cwd -> /root
-r-------- 1 root root 0 Dec 31 00:00 environ
lrwxrwxrwx 1 root root 0 Dec 31 00:00 exe -> /usr/bin/bash
dr-x------ 2 root root 0 Dec 31 00:00 fd
dr-x------ 2 root root 0 Dec 31 00:00 fdinfo
-r--r--r-- 1 root root 0 Dec 31 00:00 hostinfo
-r-------- 1 root root 0 Dec 31 00:00 io
-r--r--r-- 1 root root 0 Dec 31 00:00 latency
-r--r--r-- 1 root root 0 Dec 31 00:00 limits
-rw-r--r-- 1 root root 0 Dec 31 00:00 loginuid
-r--r--r-- 1 root root 0 Dec 31 00:00 maps
-rw------- 1 root root 0 Dec 31 00:00 mem
-r--r--r-- 1 root root 0 Dec 31 00:00 mountinfo
-r--r--r-- 1 root root 0 Dec 31 00:00 mounts
-r-------- 1 root root 0 Dec 31 00:00 mountstats
dr-xr-xr-x 7 root root 0 Dec 31 00:00 net
dr-x--x--x 2 root root 0 Dec 31 00:00 ns
-r--r--r-- 1 root root 0 Dec 31 00:00 numa_maps
-rw-r--r-- 1 root root 0 Dec 31 00:00 oom_adj
-r--r--r-- 1 root root 0 Dec 31 00:00 oom_score
-rw-r--r-- 1 root root 0 Dec 31 00:00 oom_score_adj
-r--r--r-- 1 root root 0 Dec 31 00:00 pagemap
-r--r--r-- 1 root root 0 Dec 31 00:00 personality
lrwxrwxrwx 1 root root 0 Dec 31 00:00 root -> /
-rw-r--r-- 1 root root 0 Dec 31 00:00 sched
-r--r--r-- 1 root root 0 Dec 31 00:00 sessionid
-r--r--r-- 1 root root 0 Dec 31 00:00 smaps
-r--r--r-- 1 root root 0 Dec 31 00:00 stack
-r--r--r-- 1 root root 0 Dec 31 00:00 stat
-r--r--r-- 1 root root 0 Dec 31 00:00 statm
-r--r--r-- 1 root root 0 Dec 31 00:00 status
-r--r--r-- 1 root root 0 Dec 31 00:00 syscall
dr-xr-xr-x 3 root root 0 Dec 31 00:00 task
-r--r--r-- 1 root root 0 Dec 31 00:00 wchan

Here, exe points to the process's executable file, cwd points to its working directory, and environ holds the environment variables as the process sees them.

You can cat the file directly to see all the variables, but the output runs together with no line breaks.

Opening the file in vim reveals that the entries are in fact separated by a character vim renders as ^@. A quick search shows it is \0, the string terminator. A little tr turns it into readable, newline-separated output:

tr '\0' '\n' < /proc/${pid}/environ
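
If you only know the process name, you can combine this with pgrep; a small sketch (the process name nginx is just an example):

# dump the environment of the oldest matching process, NUL -> newline
sudo cat /proc/$(pgrep -o nginx)/environ | tr '\0' '\n'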

The Difference Between MySQL's NOW() and SYSDATE() Functions

I came across an article on the MySQL Performance Blog about the now() and sysdate() functions.

It reminded me that many people ask why MySQL provides both now() and sysdate(), which both return the current time, and what the difference between them is. Let's take a detailed look.

First, consider the following odd result:

mysql> SELECT NOW(),SYSDATE();
+---------------------+---------------------+
| NOW()               | SYSDATE()           |
+---------------------+---------------------+
| 1999-01-01 00:00:00 | 2012-12-05 09:50:03 |
+---------------------+---------------------+
1 row in set (0.00 sec)

Interesting, isn't it?
sysdate() returns the current time, while now() somehow returns "1999-01-01 00:00:00".
Let me state up front that this is not Photoshopped or otherwise doctored; by the end of this post you will be able to reproduce the same result on your own MySQL.

There is another difference between now() and sysdate():

mysql> SELECT NOW(), SLEEP(2), NOW();
+---------------------+----------+---------------------+
| NOW()               | SLEEP(2) | NOW()               |
+---------------------+----------+---------------------+
| 2006-04-12 13:47:36 |        0 | 2006-04-12 13:47:36 |
+---------------------+----------+---------------------+

mysql> SELECT SYSDATE(), SLEEP(2), SYSDATE();
+---------------------+----------+---------------------+
| SYSDATE()           | SLEEP(2) | SYSDATE()           |
+---------------------+----------+---------------------+
| 2006-04-12 13:47:44 |        0 | 2006-04-12 13:47:46 |
+---------------------+----------+---------------------+

With now(), even though we slept for 2 seconds, both calls return the same value, '2006-04-12 13:47:36'.
With sysdate(), the two values are '2006-04-12 13:47:44' and '2006-04-12 13:47:46', exactly two seconds apart.

The root cause is explained in the MySQL reference manual's entry for now(): http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_now
Here is a short summary.
now() returns the current time. But how is "current" determined?

For now(), the time is taken at the moment the statement starts executing, and the value does not change for the duration of the statement. It even stays constant across an entire stored procedure or trigger.

That explains why, despite sleeping for 2 seconds, SELECT NOW(), SLEEP(2), NOW(); returns the same value twice: '2006-04-12 13:47:36'.

Next question: where does now() get its notion of the current time? From a MySQL variable called "TIMESTAMP".

Odd, right?

This is a consequence of MySQL replication. Think about a statement like insert into gguard values (3,now()); executed on two MySQL servers: would both insert the same value? If now() read the machine's system clock the way sysdate() does, then executing this same SQL statement on the master and on the slave would leave that row inconsistent between the two.

Master/slave inconsistency has to be solved, and there are two ways to do it:

1. Fix the function itself.

2. Drop statement-level replication and instead, Oracle-style, record the data changes themselves and replay them verbatim on the slave.

The second approach is the familiar binlog_format=ROW. The first is exactly what now() does: instead of reading the system clock, it reads the MySQL variable "TIMESTAMP".

Other variables in the same family include insert_id (used during replication to carry AUTO_INCREMENT values).

With mysqlbinlog you can see that every binlog event carries a timestamp:

# at 441
#121205 10:06:52 server id 5 end_log_pos 526 Query thread_id=5 exec_time=0 error_code=0
SET TIMESTAMP=1354673212.982122/*!*/;
BEGIN
/*!*/;
# at 526
#121205 10:06:52 server id 5 end_log_pos 642 Query thread_id=5 exec_time=0 error_code=0
use `test`/*!*/;
SET TIMESTAMP=1354673212.982122/*!*/;
insert into gguard values (3,now())
/*!*/;
# at 642
#121205 10:06:52 server id 5 end_log_pos 669 Xid = 26
COMMIT/*!*/;

When the slave replays the binlog, the SQL thread executes a SET TIMESTAMP before each insert or other operation, which guarantees that now() behaves identically on the slave and the master under statement-based replication.
There is a flip side: under statement mode, sysdate() will differ between master and slave; in other words, sysdate() is unsafe for statement-based replication.

So how do you make SELECT NOW(),SYSDATE(); return different times, as shown at the top? Just run this first:

SET TIMESTAMP=UNIX_TIMESTAMP('1999-01-01');
SELECT NOW(),SYSDATE();

Now go and enjoy the mystery of now() and sysdate() 🙂

Notes:

CURRENT_TIMESTAMP(), LOCALTIME(), and LOCALTIMESTAMP() are all synonyms of now(), so they need no separate discussion.
sysdate() has no synonyms.
If now()'s semantics are enough for you and you don't need the machine's system time on every call, you can start MySQL with --sysdate-is-now, which makes MySQL treat sysdate() as a synonym of now().
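
For instance, a minimal sketch of making this permanent in my.cnf (assuming you control the server's config file):

[mysqld]
sysdate-is-now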

References:
http://www.mysqlperformanceblog.com/2012/11/28/replication-of-the-now-function-also-time-travel/

[Redis] redis-cli Command Summary

Redis provides a rich set of commands for operating on the database and its various data types, and they can be issued from a Linux terminal. When programming, e.g. with a Redis Java client library, each command has a corresponding method. Below is a summary of the commands Redis provides.

Official command list: http://redis.io/commands

1. Connection commands

  • quit: close the connection
  • auth: simple password authentication

2. Commands on keys and values

  • exists(key): check whether a key exists
  • del(key): delete a key
  • type(key): return the type of a key's value
  • keys(pattern): return all keys matching the given pattern
  • randomkey: return a random key from the key space
  • rename(oldname, newname): rename a key from oldname to newname; if newname already exists, the key it refers to is deleted
  • dbsize: return the number of keys in the current database
  • expire: set a key's time to live, in seconds (see the session sketch after this list)
  • ttl: get a key's remaining time to live
  • select(index): switch to the database with the given index
  • move(key, dbindex): move a key from the current database to the database with index dbindex
  • flushdb: delete all keys in the currently selected database
  • flushall: delete all keys in all databases
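
A quick redis-cli session showing expire and ttl in action (key and values illustrative):

  redis 127.0.0.1:6379> set session abc
  OK
  redis 127.0.0.1:6379> expire session 60
  (integer) 1
  redis 127.0.0.1:6379> ttl session
  (integer) 60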

3. String commands

  • set(key, value): assign value to the string named key
  • get(key): return the value of the string named key
  • getset(key, value): assign a new value to the string named key and return its previous value
  • mget(key1, key2,…, key N): return the values of multiple strings (named key1, key2, …)
  • setnx(key, value): if no string named key exists, add one with value value
  • setex(key, time, value): add a string (name key, value value) and at the same time set it to expire after time
  • mset(key1, value1, key2, value2,…key N, value N): set multiple strings at once, assigning value i to the string named key i
  • msetnx(key1, value1, key2, value2,…key N, value N): if none of the strings named key i exist, add them all, assigning value i to key i
  • incr(key): increment the string named key by 1
  • incrby(key, integer): increment the string named key by integer
  • decr(key): decrement the string named key by 1
  • decrby(key, integer): decrement the string named key by integer
  • append(key, value): append value to the value of the string named key (see the session sketch after this list)
  • substr(key, start, end): return a substring of the value of the string named key
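
A few of these in an interactive session (a sketch; the key name is arbitrary):

  redis 127.0.0.1:6379> set counter 10
  OK
  redis 127.0.0.1:6379> incrby counter 5
  (integer) 15
  redis 127.0.0.1:6379> append counter "-suffix"
  (integer) 9
  redis 127.0.0.1:6379> get counter
  "15-suffix"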

4. List commands

  • rpush(key, value): append an element with value value to the tail of the list named key
  • lpush(key, value): prepend an element with value value to the head of the list named key
  • llen(key): return the length of the list named key
  • lrange(key, start, end): return the elements between positions start and end of the list named key (indexes start at 0, here and below; see the short session after this list)
  • ltrim(key, start, end): trim the list named key, keeping only the elements between start and end
  • lindex(key, index): return the element at position index of the list named key
  • lset(key, index, value): set the element at position index of the list named key to value
  • lrem(key, count, value): remove occurrences of value from the list named key; with count = 0 all occurrences are removed, with count > 0 count occurrences are removed from head to tail, and with count < 0 |count| occurrences are removed from tail to head
  • lpop(key): return and remove the head element of the list named key
  • rpop(key): return and remove the tail element of the list named key
  • blpop(key1, key2,… key N, timeout): blocking version of lpop. With timeout 0, if a list named key i does not exist or is empty the command simply ends; with timeout > 0 it waits up to timeout seconds in that situation, then tries the pop on the lists starting from key i+1
  • brpop(key1, key2,… key N, timeout): blocking version of rpop; see the previous command
  • rpoplpush(srckey, dstkey): return and remove the tail element of the list named srckey and push it onto the head of the list named dstkey
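
For example, a small sketch of pushing and reading back a list:

  redis 127.0.0.1:6379> rpush mylist a
  (integer) 1
  redis 127.0.0.1:6379> rpush mylist b
  (integer) 2
  redis 127.0.0.1:6379> lpush mylist c
  (integer) 3
  redis 127.0.0.1:6379> lrange mylist 0 -1
  1) "c"
  2) "a"
  3) "b"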

5. Set commands

  • sadd(key, member): add element member to the set named key
  • srem(key, member): remove element member from the set named key
  • spop(key): remove and return a random element of the set named key
  • smove(srckey, dstkey, member): move element member from the set named srckey to the set named dstkey
  • scard(key): return the cardinality of the set named key
  • sismember(key, member): test whether member is an element of the set named key
  • sinter(key1, key2,…key N): intersection of the given sets
  • sinterstore(dstkey, key1, key2,…key N): compute the intersection and store it in the set dstkey
  • sunion(key1, key2,…key N): union of the given sets
  • sunionstore(dstkey, key1, key2,…key N): compute the union and store it in the set dstkey
  • sdiff(key1, key2,…key N): difference of the given sets
  • sdiffstore(dstkey, key1, key2,…key N): compute the difference and store it in the set dstkey
  • smembers(key): return all elements of the set named key
  • srandmember(key): return a random element of the set named key

6. Sorted set (zset) commands

  • zadd(key, score, member): add element member to the zset named key, with score used for ordering; if the element already exists, its position is updated according to the new score
  • zrem(key, member): remove element member from the zset named key
  • zincrby(key, increment, member): if member exists in the zset named key, increase its score by increment; otherwise add the element with score increment
  • zrank(key, member): return the rank (i.e. index, starting at 0) of member in the zset named key, with elements ordered by score ascending; returns "nil" if member is absent
  • zrevrank(key, member): return the rank (index from 0) of member in the zset named key, with elements ordered by score descending; returns "nil" if member is absent
  • zrange(key, start, end): return the elements with index start through end of the zset named key, ordered by score ascending (see the session sketch after this list)
  • zrevrange(key, start, end): return the elements with index start through end of the zset named key, ordered by score descending
  • zrangebyscore(key, min, max): return all elements of the zset named key with min <= score <= max
  • zcard(key): return the cardinality of the zset named key
  • zscore(key, element): return the score of element in the zset named key
  • zremrangebyrank(key, min, max): remove all elements of the zset named key with min <= rank <= max
  • zremrangebyscore(key, min, max): remove all elements of the zset named key with min <= score <= max
  • zunionstore / zinterstore(dstkeyN, key1,…,keyN, WEIGHTS w1,…wN, AGGREGATE SUM|MIN|MAX): compute the union or intersection of N zsets and store the result in dstkeyN. Before the AGGREGATE step, each element's score is multiplied by the corresponding WEIGHT (default 1 if none is given). The default AGGREGATE is SUM, meaning a result element's score is the SUM of its scores across the input sets; MIN and MAX instead take the smallest or largest of those scores
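
A short session to make the ordering concrete (scores and members illustrative):

  redis 127.0.0.1:6379> zadd board 100 alice
  (integer) 1
  redis 127.0.0.1:6379> zadd board 90 bob
  (integer) 1
  redis 127.0.0.1:6379> zrange board 0 -1
  1) "bob"
  2) "alice"
  redis 127.0.0.1:6379> zscore board alice
  "100"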

7. Hash commands

  • hset(key, field, value): add a field<->value pair to the hash named key
  • hget(key, field): return the value of field in the hash named key
  • hmget(key, field1, …,field N): return the values of fields field i in the hash named key
  • hmset(key, field1, value1,…,field N, value N): add field i<->value i pairs to the hash named key
  • hincrby(key, field, integer): increase the value of field in the hash named key by integer
  • hexists(key, field): test whether a field named field exists in the hash named key
  • hdel(key, field): delete the field named field from the hash named key
  • hlen(key): return the number of elements in the hash named key
  • hkeys(key): return all field names of the hash named key
  • hvals(key): return all values of the hash named key
  • hgetall(key): return all fields and their values of the hash named key (see the example after this list)
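
For example (field names arbitrary):

  redis 127.0.0.1:6379> hset user name vpsee
  (integer) 1
  redis 127.0.0.1:6379> hset user age 30
  (integer) 1
  redis 127.0.0.1:6379> hgetall user
  1) "name"
  2) "vpsee"
  3) "age"
  4) "30"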

8. Persistence

  • save: save the data to disk synchronously
  • bgsave: save the data to disk asynchronously
  • lastsave: return the Unix timestamp of the last successful save to disk
  • shutdown: save the data to disk synchronously, then shut the server down

9. Remote server control

  • info: server information and statistics
  • monitor: dump received requests in real time
  • slaveof: change the replication settings
  • config: configure the Redis server at runtime

Advanced Redis usage
1. Security
You can require clients to authenticate before running any command. An outside user can make on the order of 1.5 million access attempts per second, so set a strong password: configure the requirepass directive in redis.conf (I use primos here).
After that, you can authenticate right when you log in:
sudo /opt/java/redis/bin/redis-cli -a primos
or connect first and then run auth primos, after which you can operate freely.

2. Master-slave replication
For this I prepared two virtual machines, with IPs 192.168.15.128 and 192.168.15.133.
Replication lets multiple slave servers hold copies of the same database as the master server.
The configuration goes on the slave:
slaveof 192.168.15.128 6379
masterauth primos
If nothing replicates, check the firewall; I use ufw, and sudo ufw allow 6379 fixed it.
You can then inspect the replication state with info.

3. Transactions
Redis's transaction support is still fairly simple: it only guarantees that the commands of a transaction issued by one client run consecutively, without commands from other clients interleaved. When a client issues multi on a connection, that connection enters a transaction context; subsequent commands are not executed immediately but queued, and when exec is issued Redis executes the whole queue in order.
For example:
set age 100
multi
set age 10
set age 20
exec
get age -- now returns 20
multi
set age 20
set age 10
exec
get age -- now returns 10, showing that the queue really executes in order
discard cancels all queued commands, i.e. rolls the transaction back.
Note, however, that when an individual command inside a transaction fails, Redis does not roll back: the commands that succeeded stay applied and the failing ones are simply dropped. This is still the case in the latest release, 2.6.7.
Optimistic locking
watch key: if a watched key is modified before exec, the now-outdated transaction will not execute, as sketched below.
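
A sketch of the watch behavior in a session (assume another client changed age between watch and exec):

  redis 127.0.0.1:6379> watch age
  OK
  redis 127.0.0.1:6379> multi
  OK
  redis 127.0.0.1:6379> set age 30
  QUEUED
  redis 127.0.0.1:6379> exec
  (nil)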

4. Persistence
Redis is an in-memory database with persistence support.
Snapshotting: the default storage mode; data is written to the binary file dump.rdb, and Redis can be configured to take a snapshot automatically whenever more than m keys are modified within n seconds.
Append-only file (AOF): Redis appends every write command to a file, and when it restarts it re-executes the write commands saved in the file to rebuild the data in memory.
5. Pub/sub messaging: the subscribe and publish commands, similar in spirit to message publishing under Linux.
6. Virtual memory
The VM feature is configurable: swap path, memory ceiling, number of pages, page size, and maximum number of worker threads.
To change the machine's IP temporarily: ifconfig eth0 192.168.15.129

redis-cli options
Usage: redis-cli [OPTIONS] [cmd [arg [arg …]]]
-h <hostname> Server hostname (default: 127.0.0.1)
-p <port> Server port (default: 6379)
-s <socket> Server socket (overrides hostname and port)
-a <password> Password to use when connecting to the server
-r <repeat> Execute specified command N times
-i <interval> When -r is used, waits <interval> seconds per command.
It is possible to specify sub-second times like -i 0.1
-n <db> Database number
-x Read last argument from STDIN
-d <delimiter> Multi-bulk delimiter in for raw formatting (default: \n)
-c Enable cluster mode (follow -ASK and -MOVED redirections)
--raw Use raw formatting for replies (default when STDOUT is not a tty)
--latency Enter a special mode continuously sampling latency
--slave Simulate a slave showing commands received from the master
--pipe Transfer raw Redis protocol from stdin to server
--bigkeys Sample Redis keys looking for big keys
--eval <file> Send an EVAL command using the Lua script at <file>
--help Output this help and exit
--version Output version and exit

Examples:
cat /etc/passwd | redis-cli -x set mypasswd
redis-cli get mypasswd
redis-cli -r 100 lpush mylist x
redis-cli -r 100 -i 1 info | grep used_memory_human:
redis-cli --eval myscript.lua key1 key2 , arg1 arg2 arg3
(Note: when using --eval the comma separates KEYS[] from ARGV[] items)

Common commands:
1) Inspecting keys
keys * // list all keys
keys prefix_* // list all keys with the prefix "prefix_"

2) Flushing databases
flushdb // remove all keys from the current database
flushall // remove all keys from all databases

Fixing WordPress Breaking Apache's mod_status

Today I signed up for a Linode account; I had long known Linode's Longview is powerful and wanted to try it. After installing it, Longview detected that Apache was running but could show no details, because my Apache status page was not configured.
So I followed the official guide, yet the page refused to respond no matter what. After several experiments and some research it turned out WordPress was the culprit.
Because WordPress has pretty permalinks enabled, every request gets rewritten to index.php, including /server-status, which therefore 404s. The fix is to add this line to WordPress's rewrite rules:
RewriteRule ^(server-info|server-status) - [L]
With this rule, when Apache sees a request for server-info or server-status it stops rewriting immediately, so the request is no longer funneled to index.php and no longer 404s.
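
For context, here is what a standard WordPress .htaccess looks like with the exception added (a sketch; if yours is customized, the rule simply needs to come before the final catch-all):

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^(server-info|server-status) - [L]
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>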

PS: There was one more small problem during installation: installing perl-DBD-MySQL kept failing with missing dependencies. yum --enablerepo=remi install perl-DBD-MySQL solved it.

Linux Mount: already mounted or busy

While handling a server problem today I ran into something strange: every attempt to mount a disk failed with:
Mount: /dev/xxx already mounted or /mnt busy
Oddly, neither
df -h
nor
lsof -n
turned up anything related. The disk appeared to be unmounted and no process was writing to the directory, yet the mount kept failing.
A bit of searching turned up a fix.
First run
dmsetup ls
to see whether Device Mapper holds any mappings for the device. If the output looks like:
[root@]# dmsetup ls
ddf1_44656c6c202020201028001510281f033832b7a2f6678dab (253, 0)
ddf1_44656c6c202020201028001510281f033832b7a2f6678dab1 (253, 1)
then run
dmsetup remove xxxxxx
twice, once for each mapping, to delete both entries, then check again:
[root@]# dmsetup ls
No devices found
Now the disk mounts without any complaints.
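
If there are many stale mappings, dmsetup also offers a bulk variant; use it carefully, since it tears down every unused mapping on the host:

dmsetup remove_all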

9 Awesome SSH Tricks

Sorry for the lame title. I was thinking the other day about how awesome SSH is, and how it’s probably one of the most crucial pieces of technology that I use every single day. Here’s a list of nine things that I think are particularly awesome and perhaps a bit off the beaten path.

Update: (2011-09-19) There are some user-submitted ssh-tricks on the wiki now! Please feel free to add your favorites. Also the hacker news thread might be helpful for some.

SSH Config

I used SSH regularly for years before I learned about the config file, which you can create at ~/.ssh/config to tell ssh how you want it to behave.

Consider the following configuration example:

  Host example.com *.example.net
     User root
  Host dev.example.net
     User shared
     Port 220 
  Host test.example.com 
     User root
     UserKnownHostsFile /dev/null
     StrictHostKeyChecking no
  Host t
     HostName test.example.org
  Host *
     Compression yes
     CompressionLevel 7
     Cipher blowfish
     ServerAliveInterval 600
     ControlMaster auto
     ControlPath /tmp/ssh-%r@%h:%p

I’ll cover some of the settings in the “Host *” block, which apply to all outgoing ssh connections, in other items in this post. Basically, you can use the config file to create shortcuts for the ssh command, to control which username is used to connect to a given host, and which port number to use if you need to connect to an ssh daemon running on a non-standard port. See “man ssh_config” for more information.

Control Master/Control Path

This is probably the coolest thing that I know about in SSH. Set the “ControlMaster” and “ControlPath” values as above in the ssh configuration. Any time you try to connect to a host that matches that configuration, a “master session” is created. Subsequent connections to the same host will then reuse the master connection rather than renegotiate and create a separate one. The result is greater speed and less overhead.

This can cause problems if you want to do port forwarding, as this must be configured on the original connection; otherwise it won’t work.
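
Two handy commands for managing a master connection (the host name is illustrative; these -O flags exist in any reasonably recent OpenSSH):

  ssh -O check example.com   # is a master connection alive for this host?
  ssh -O exit example.com    # tear the master connection down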

SSH Keys

While ControlMaster/ControlPath is the coolest thing you can do with SSH, key-based authentication is probably my favorite. Basically, rather than forcing users to authenticate with passwords, you can use a secure cryptographic method to gain (and grant) access to a system. Deposit a public key on servers far and wide, while keeping a “private” key secure on your local machine. And it just works.

You can generate multiple keys, to make it more difficult for an intruder to gain access to multiple machines by breaching a specific key or machine. You can specify particular keys and key files to be used when connecting to specific hosts in the ssh config file (see above). Keys can also (optionally) be encrypted locally with a pass-code for additional security. Once I understood how secure the system is (or can be), I found myself thinking “I wish you could use this for more than just SSH.”
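
A minimal key setup looks like this (a sketch, assuming a stock OpenSSH client; the host is illustrative):

  ssh-keygen -t rsa -b 4096      # generate a key pair under ~/.ssh/
  ssh-copy-id user@example.com   # append your public key to the remote authorized_keys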

SSH Agent

Most people start using SSH keys because they’re easier and it means that you don’t have to enter a password every time you want to connect to a host. But the truth is that in most cases you don’t want unencrypted private keys with meaningful access to systems lying around, because once someone has a copy of the private key they have full access to the system. That’s not good.

But the truth is that typing in passwords is a pain, so there’s a solution: the ssh-agent. Basically, you authenticate to the ssh-agent locally, which decrypts the key and does some magic, so that whenever the key is needed for connecting to a host you don’t have to enter your password. ssh-agent manages the local encryption on your key for the current session.
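
Typical usage is something like this (most desktop environments start an agent for you, so the first line is often unnecessary):

  eval $(ssh-agent)   # start an agent; exports SSH_AUTH_SOCK and SSH_AGENT_PID
  ssh-add             # decrypt ~/.ssh/id_rsa after one passphrase prompt
  ssh-add -l          # list the keys the agent currently holds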

SSH Reagent

I’m not sure where I found this amazing little function, but it’s great. Typically, ssh-agents are attached to the current session, like the window manager, so that when the window manager dies, the ssh-agent loses the decrypted bits from your ssh key. That’s nice, but it also means that if you have processes that exist outside of your window manager’s session (e.g. screen sessions), they lose the ssh-agent and get trapped without access to one, so you end up having to restart would-be-persistent processes, or you have to run a large number of ssh-agents, which is not ideal.

Enter “ssh-reagent”. Stick this in your shell configuration (e.g. ~/.bashrc or ~/.zshrc) and run ssh-reagent whenever you have an agent session running and a terminal that can’t see it.

  ssh-reagent () {
          for agent in /tmp/ssh-*/agent.*; do
                  export SSH_AUTH_SOCK=$agent
                  if ssh-add -l 2>&1 > /dev/null; then
                          echo Found working SSH Agent:
                          ssh-add -l
                          return
                  fi
          done
          echo Cannot find ssh agent - maybe you should reconnect and forward it?
  }

It’s magic.

SSHFS and SFTP

Typically we think of ssh as a way to run a command or get a prompt on a remote machine. But SSH can do a lot more than that, and OpenSSH, which is probably the most popular implementation of SSH these days, has a lot of features that go beyond just “shell” access. Here are two cool ones:

SSHFS creates a mountable file system, using FUSE, of the files located on a remote system over SSH. It’s not always very fast, but it’s simple and works great for quick operations on local systems, where the speed issue is much less relevant.

SFTP replaces FTP (which is plagued by security problems) with a similar tool for transferring files between two systems that’s secure (because it works over SSH) and just as easy to use. In fact, most recent OpenSSH daemons provide SFTP access by default.
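
For instance, mounting a remote directory with sshfs might look like this (sshfs is packaged separately from OpenSSH; paths and host are illustrative):

  sshfs user@example.com:/var/www /mnt/remote-www   # mount the remote directory locally
  fusermount -u /mnt/remote-www                     # unmount when done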

There’s more, like a full VPN solution in recent versions, secure remote file copy, port forwarding, and the list could go on.

SSH Tunnels

SSH includes the ability to connect a port on your local system to a port on a remote system, so that to applications on your local system the local port looks like any other local port, but when accessed, the service running on the remote machine responds. All traffic is really sent over ssh.

I set up an SSH tunnel from my local system to the outgoing mail server on my server. I tell my mail client to send mail to localhost (without mail server authentication!), and it magically goes to my personal mail relay, encrypted over ssh. The applications of this are nearly endless.
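
Such a tunnel is a one-liner (ports and host are illustrative):

  ssh -f -N -L 2525:localhost:25 user@mail.example.com
  # point the mail client at localhost:2525; connections come out of the
  # remote machine's port 25, inside the encrypted session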

Keep Alive Packets

The problem: unless you’re doing something with SSH, it doesn’t send any packets, and as a result the connections can be pretty resilient to network disturbances. That’s not a problem in itself, but it does mean that unless you’re actively using an SSH session, it can go silent, causing your local area network’s NAT to eat a connection that it thinks has died, but hasn’t. The solution is to set the “ServerAliveInterval [seconds]” option in the SSH configuration (as in the “Host *” block above), so that your ssh client sends a “dummy packet” at a regular interval and the router thinks the connection is active even when it’s particularly quiet. It’s good stuff.

/dev/null known_hosts

A lot of what I do in my day job involves deploying new systems, testing something out, and then destroying that installation and starting over on the same virtual machine. So my “test rigs” have a few fixed IP addresses, I can’t readily deploy keys on these hosts, and every time I redeploy, SSH’s host-key checking tells me that a different system is responding for the host. In most cases that is the symptom of some sort of security error, and knowing about it is a good thing, but in some cases it can be very annoying.

These configuration values tell your SSH session to save host keys to /dev/null (i.e. drop them on the floor) and to not ask you to verify an unknown host:

 UserKnownHostsFile /dev/null
 StrictHostKeyChecking no

This probably saves me a little annoyance and a minute or two every day or more, and it’s totally worth it. Don’t set these values for hosts that you actually care about.


I’m sure there are other awesome things you can do with ssh, and I’d love to hear more. Onward and Upward!

Keep an Eye on SSH Forwarding!

OpenSSH is a wonderful toolbox. The main purpose is to establish encrypted connections (SSH means Secure SHell) to a remote UNIX machine and, once authenticated, to spawn a shell to perform remote administration. Running on port 22 (by default), the client (ssh) and the server (sshd) exchange encrypted information (what you type and the result of your command). I won’t review the long list of options available with SSH; let’s focus on a particular feature: tunneling.

By default, sshd (the server) has the AllowTcpForwarding flag turned on (I won’t start a debate here about this default setting). “TCP forwarding” allows you to encapsulate any other protocol (TCP-based, of course) inside an already established SSH connection. It’s very useful for increasing the security of any insecure protocol that exchanges data in clear text (for example, checking a mailbox via the POP3 or IMAP protocol). TCP forwarding is also a common way to “hide” your activity on the network. Here is an example:

# ssh -f -N -L 1100:localhost:110 user@pop3.company.com
user@pop3.company.com's password: 
# telnet localhost 1100
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
+OK Solid POP3 server ready
quit
+OK session ended
Connection closed by foreign host.

If you want to read more about tunnels, check the following tutorial.

But the ssh client has a much more interesting feature: dynamic port forwarding. When you connect to the remote host and specify a “-D <port>” argument, the remote ssh server acts as a SOCKS proxy! Example:

# ssh -f -N -D 9001 user@server.company.com

From now on, any application that supports SOCKS proxies can use the proxy running on 127.0.0.1:9001! Here is an example with Firefox:

(Screenshot: Firefox connection settings, with the SOCKS host set to 127.0.0.1 and the port to 9001.)
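
Before touching the browser you can check the proxy from the command line; curl speaks SOCKS natively (the target URL is illustrative):

# curl --socks5-hostname 127.0.0.1:9001 http://ifconfig.me
(should print the UNIX server's public IP address, not yours)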

Configured like this, your Firefox will send all HTTP traffic through the remote server via the SSH session. The server connects to the final website and sends the HTTP requests. Really nice! But there are some security concerns:

  • The UNIX server will generate a lot of traffic to the Internet. There is a risk of high resource consumption (bandwidth and/or CPU).
  • As connections will originate from the UNIX server itself, the IP address logged by remote services will be the UNIX server's. There is a risk in case of abuse (hidden IP address).
  • By default, the SSH daemon permits all protocols to be forwarded. Some users may sidestep your security policy by encapsulating unexpected protocols (instant messaging is a good example).

Here are some steps for offering SSH tunnels in a safe way. First of all, if you don’t really need this feature, disable it! In /etc/ssh/sshd_config, set AllowTcpForwarding to no and restart the sshd process.

Logging

By default, the SSH daemon does not log the sessions established via a tunnel. To see them, you need to run sshd in debug mode (-d), which is not acceptable in an operational environment. Here is a quick patch to log all outgoing sessions initiated by sshd, mapped to the UID (user ID). In serverloop.c, patch the function server_request_direct_tcpip() like this:

915,918d914
<  // BEGIN PATCH TunnelLogging
<  uid_t who;
<  // END PATCH
<
925,930c921,922
<  // BEGIN PATCH TunnelLogging
<  // debug("server_request_direct_tcpip: originator %s port %d, target %s port %d",
<  who = getuid();
<    logit("Tunnel: %s:%d -> %s:%d UID(%d)",
<      originator, originator_port, target, target_port, who);
<  // END PATCH
---
>  debug("server_request_direct_tcpip: originator %s port %d, target %s port %d",
>      originator, originator_port, target, target_port);

For each new TCP session, the following line will be sent to Syslog:

Feb 27 08:03:08 honey sshd[9060]: Tunnel: 127.0.0.1:51209 -> 0.channel26.facebook.com:80 UID(2349)

The patch makes it possible to correlate who connected, and from which IP address.

Restricting the allowed ports

By default, sshd allows TCP sessions to be forwarded to any port. You can restrict them to specific hosts and/or ports via the PermitOpen parameter (available since release 4.4):

PermitOpen host:port
PermitOpen IPv4_addr:port
PermitOpen [IPv6_addr]:port

Another alternative is to use the local firewall, iptables, to restrict connections initiated by the UNIX server.
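
A sketch of that approach using iptables' owner match (the UID 2349 comes from the log line above; adapt the ports to your policy):

# let this user's forwarded sessions reach web ports only
iptables -A OUTPUT -m owner --uid-owner 2349 -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 2349 -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 2349 -j REJECT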

Restricting the allowed users or groups

Now that hosts and ports are restricted, it can be useful to restrict who can use the port forwarding feature. Back to the sshd_config man page, let’s have a look at the Match keyword:

Introduces a conditional block. If all of the criteria on the Match line are satisfied, the keywords on the following lines override those set in the global section of the config file, until either another Match line or the end of the file. The arguments to Match are one or more criteria-pattern pairs. The available criteria are User, Group, Host, and Address. Only a subset of keywords may be used on the lines following a Match keyword. Available keywords are AllowTcpForwarding, Banner, ForceCommand, GatewayPorts, GSSAPIAuthentication, KbdInteractiveAuthentication, KerberosAuthentication, PasswordAuthentication, PermitOpen, RhostsRSAAuthentication, RSAAuthentication, X11DisplayOffset, X11Forwarding, and X11UseLocalhost.

Here are some examples. First, let’s restrict which users are allowed to forward TCP sessions:

AllowTcpForwarding no
Match User john,andy,ted
    AllowTcpForwarding yes

Or better, allow specific ports per user group:

AllowTcpForwarding no
Match Group admins
    AllowTcpForwarding yes
Match User john,andy,ted
    AllowTcpForwarding yes
    PermitOpen 192.168.0.1:443

With this configuration, administrators will be able to open unrestricted connections, the three listed users will only be able to open sessions to port 443 on a single server, and all remaining users won’t be allowed to create tunnels at all.

Restricting the server's Internet connectivity

Finally, restrict your server's Internet connectivity! Even if you don't allow TCP forwarding, it's a good idea: a server should never have full, direct Internet connectivity. Close everything, then open connectivity as specific needs arise (for example, downloading patches via HTTP(S) from specific servers).

Syncing Local and Remote Directories with lsyncd

There are plenty of tools and tricks for automatically syncing a directory on a local server (or VPS) to one or more remote servers. The simplest is probably rsync + cron (see: mirroring a blog with a VPS), but it has a drawback: rsync only runs at whatever fixed interval cron invokes it. Set the interval too short and the frequent rsync runs burden the server; set it too long and data may not be synced promptly. lsyncd, introduced here, instead uses the inotify mechanism in the Linux kernel (2.6.13 and later), which means it syncs only when something actually changes. lsyncd watches a reference directory on the local server; as soon as files or directories under it change, it notifies the remote server and syncs via rsync or rsync+ssh. By default lsyncd triggers a sync every 20 seconds or once 1000 write events have accumulated, whichever comes first, and both thresholds can be tuned through configuration parameters (see the delay sketch below).

lsyncd is in Ubuntu's official repositories, so installation is easy:

$ sudo apt-get update
$ sudo apt-get install lsyncd

lsyncd doesn't create the config file and directories it needs, so create them by hand:

$ sudo mkdir /etc/lsyncd
$ sudo touch /etc/lsyncd/lsyncd.conf.lua

$ sudo mkdir /var/log/lsyncd
$ sudo touch /var/log/lsyncd/lsyncd.{log,status}

Configure lsyncd, paying attention to the source, host, and targetdir entries; they are, in order, the local directory to sync from (the source), the remote machine's IP, and the remote directory (the target):

$ sudo vi /etc/lsyncd/lsyncd.conf.lua
settings {
        logfile = "/var/log/lsyncd/lsyncd.log",
        statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
        default.rsyncssh,
        source = "/home/vpsee/local",
        host = "192.168.2.5",
        targetdir = "/remote"
}
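
If the default trigger thresholds don't suit your workload, the sync block also accepts a delay option; a sketch (the 5 seconds is illustrative):

sync {
        default.rsyncssh,
        source = "/home/vpsee/local",
        host = "192.168.2.5",
        targetdir = "/remote",
        delay = 5     -- aggregate events for at most 5 seconds before syncing
}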

Set up passwordless ssh root login between the local and remote machines, and create a /remote directory on the remote machine (assume its IP is 192.168.2.5):

$ sudo su
# ssh-keygen -t rsa
# ssh-copy-id root@192.168.2.5

# ssh 192.168.2.5
# mkdir /remote

With that in place, start the lsyncd service on the local machine; from then on, everything under /home/vpsee/local on the local machine syncs automatically to /remote on the remote machine:

$ sudo service lsyncd restart

Besides syncing a local directory to a remote one, lsyncd can just as easily sync a local directory to another local directory; only the config file changes:

$ sudo vi /etc/lsyncd/lsyncd.conf.lua
settings {
        logfile = "/var/log/lsyncd/lsyncd.log",
        statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
	default.rsync,
	source = "/home/vpsee/local",
	target = "/localbackup"
}

$ sudo service lsyncd restart
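
To confirm everything is working, watch the log and status files configured above:

$ tail -f /var/log/lsyncd/lsyncd.log
$ cat /var/log/lsyncd/lsyncd.status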

Limiting Concurrent Connections per IP or Virtual Host in Nginx

Adapted from http://nginx.org/cn/docs/http/ngx_http_limit_conn_module.html

The ngx_http_limit_conn_module module limits the number of connections per defined key, in particular the number of connections from a single IP address.

Not all connections are counted by the module; only connections with a request currently being processed (one whose header has been completely read) are counted.

Example configuration

http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    ...

    server {

        ...

        location /download/ {
            limit_conn addr 1;
        }

Directives

Syntax: limit_conn zone number;
Default: —
Context: http, server, location

Sets the shared memory zone (configured earlier) and the maximum allowed number of connections per key value. When the limit is exceeded, the server returns the 503 (Service Temporarily Unavailable) error. For example, the configuration

limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    location /download/ {
        limit_conn addr 1;
    }

allows only one connection per IP address at a time.

When multiple limit_conn directives are configured, all of the configured limits apply. For example, the following configuration limits not only the number of connections from a single client IP, but also the total number of connections to the virtual server:

limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn_zone $server_name zone=perserver:10m;

server {
    ...
    limit_conn perip 10;
    limit_conn perserver 100;
}

If there is no limit_conn directive at the current configuration level, the connection limits are inherited from the level above.
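
For example, a sketch of the inheritance rule (server names illustrative): a limit set at the http level applies to every server block that doesn't define its own:

http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_conn perip 20;          # inherited by the first server below

    server { server_name a.example.com; }

    server {
        server_name b.example.com;
        limit_conn perip 5;       # overrides the inherited limit
    }
}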

Syntax: limit_conn_log_level info | notice | warn | error;
Default: limit_conn_log_level error;
Context: http, server, location
This directive appeared in version 0.8.18.

Sets the log level used when the server refuses a connection because the configured maximum was exceeded.

Syntax: limit_conn_zone $variable zone=name:size;
Default: —
Context: http

Sets the parameters of the shared memory zone that keeps state for each key; in particular, the state includes the current number of connections. The key is the value of the given variable, and any non-empty value can serve as a key (empty values are not counted). Usage example:

limit_conn_zone $binary_remote_addr zone=addr:10m;

Here the client's IP address serves as the key. Note that the variable used is $binary_remote_addr, not $remote_addr. The $remote_addr variable is between 7 and 15 bytes long, and its stored state occupies either 32 or 64 bytes on 32-bit platforms and always 64 bytes on 64-bit platforms. The $binary_remote_addr variable is always 4 bytes long, and its stored state occupies 32 bytes on 32-bit platforms and 64 bytes on 64-bit platforms. One megabyte of shared memory can hold about 32 thousand 32-byte states or 16 thousand 64-byte states. If the zone storage is exhausted, the server returns the 503 (Service Temporarily Unavailable) error to all further requests.

Syntax: limit_zone name $variable size;
Default: —
Context: http

This directive was deprecated in version 1.1.8; the equivalent limit_conn_zone directive should be used instead. Note that its syntax differs:
limit_conn_zone $variable zone=name:size;

Adding the limitipconn Module to Apache to Limit Concurrent Connections per IP

Download & install

First get the latest download link from http://dominia.org/djao/limitipconn2.html. What I got was:

http://dominia.org/djao/limit/mod_limitipconn-0.24.tar.bz2

wget http://dominia.org/djao/limit/mod_limitipconn-0.24.tar.bz2

bzip2 -d mod_limitipconn-0.24.tar.bz2

tar -xvf mod_limitipconn-0.24.tar

apxs2 -c -i -a mod_limitipconn.c

apxs2 here is an optional Ubuntu component that isn't installed by default; install it with apt-get install apache2-dev.

On CentOS, use yum install httpd-devel instead; there, the resulting executable is named apxs.

On other RH-family systems, find the httpd-devel rpm package and install it with rpm; I won't go into details here.

Afterwards, check whether httpd.conf or apache2.conf contains LoadModule limitipconn_module modules/mod_limitipconn.so. If it does, all is well; continue with the configuration below:

# Download limits for files under the web root
<IfModule mod_limitipconn.c>
    # the host's root directory
    <Location />
        # at most 3 concurrent connections per IP
        MaxConnPerIP 3
        # images are exempt from the IP limit
        NoIPLimit image/*
    </Location>

    # the host's /mp3 directory
    <Location /mp3>
        # at most 1 concurrent connection per IP
        MaxConnPerIP 1
        # limit only audio and video files
        OnlyIPLimit audio/mpeg video
    </Location>
</IfModule>
Restart Apache and you're done.
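
To double-check that the module actually loaded before relying on it, something like this should work (apache2ctl on Debian/Ubuntu; plain apachectl elsewhere):

apache2ctl -M | grep limitipconn    # should list limitipconn_module
sudo service apache2 restart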