
Tuesday, March 31, 2009

A little knowledge about unixODBC

For details, see http://en.wikipedia.org/wiki/Open_Database_Connectivity
I excerpted and translated the following.

UnixODBC

The unixODBC project — headed, maintained and supported by Easysoft Director Nick Gorham — has become[update] the most common driver-manager for non-Microsoft Windows platforms (and for one Microsoft platform, Interix). It offered full ODBC3 support and Unicode in advance of its competitors. Most Linux distributions as of 2006[update] ship it, including Red Hat, Mandriva and Gentoo. Several vendors of commercial databases, including IBM (DB2, Informix), Oracle and SAP (Ingres) use it for their own drivers. It includes GUI support for both KDE and GNOME. Many open source projects — including OpenOffice.org and Glade — also make use of it. It builds on any platform that supports the GNU autoconf tools (in other words, most of them). For licensing, UnixODBC uses the LGPL and the GPL.
(My note: the "competitors" it beat to full ODBC3 and Unicode support effectively means Microsoft.)

Monday, March 30, 2009

ejabberd 2.0.4 Installation and Operation Guide, Chapter 5: Securing ejabberd

secure: v., to make safe (per the Kingsoft PowerWord dictionary)
5.1 Firewall Settings

You need to keep the following TCP ports in mind when configuring your firewall:

Port Description
5222 Standard port for Jabber/XMPP client connections, plain or STARTTLS.
(My note: I don't yet know what STARTTLS means.)
5223 Standard port for Jabber client connections using the old SSL method.
5269 Standard port for Jabber/XMPP server connections.
(My note: is this just the s2s port?)
4369 Port used by EPMD for communication between Erlang nodes.
(My note: this is used in clustering.)
port range Used for connections between Erlang nodes. This range is configurable.
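The table above maps directly to firewall rules. Below is a hypothetical iptables sketch; the cluster subnet 192.0.2.0/24 and the Erlang port range 4370:4375 are assumptions to adapt, and the rules are illustrative rather than a complete policy:

```shell
# Allow XMPP traffic from anywhere
iptables -A INPUT -p tcp --dport 5222 -j ACCEPT   # c2s, plain or STARTTLS
iptables -A INPUT -p tcp --dport 5223 -j ACCEPT   # c2s, legacy SSL
iptables -A INPUT -p tcp --dport 5269 -j ACCEPT   # s2s

# Allow epmd and inter-node traffic only from cluster peers (example subnet)
iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 4369 -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 4370:4375 -j ACCEPT

# Block epmd and the Erlang distribution range from everyone else
iptables -A INPUT -p tcp --dport 4369 -j DROP
iptables -A INPUT -p tcp --dport 4370:4375 -j DROP
```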
5.2 epmd

epmd (Erlang Port Mapper Daemon) is a small name server included in Erlang/OTP and used by Erlang programs when establishing distributed Erlang communications. ejabberd needs epmd to use ejabberdctl and also when clustering ejabberd nodes. This small program is automatically started by Erlang, and is never stopped. If ejabberd is stopped, and there aren't any other Erlang programs running in the system, you can safely stop epmd if you want.
(My notes: epmd is a small name server, much like a DNS server, for Erlang nodes. It is used by ejabberdctl, the main admin script, and when clustering ejabberd nodes. It looks like epmd is part of the OTP environment, so normally you can just leave it alone; is that the right way to understand it?)

ejabberd runs inside an Erlang node. To communicate with ejabberd, the script ejabberdctl starts a new Erlang node and connects to the Erlang node that holds ejabberd. In order for this communication to work, epmd must be running and listening for name requests in the port 4369. You should block the port 4369 in the firewall, so only the programs in your machine can access it.

If you build a cluster of several ejabberd instances, each ejabberd instance is called an ejabberd node. Those ejabberd nodes use a special Erlang communication method to build the cluster, and EPMD is again needed listening in the port 4369. So, if you plan to build a cluster of ejabberd nodes you must open the port 4369 for the machines involved in the cluster. Remember to block the port so Internet doesn't have access to it.

(My note: the guide says EPMD is "again needed", stressing how important it is. For a cluster, open port 4369 between the machines in the cluster, but remember to block it from the Internet.)

Once an Erlang node solved the node name of another Erlang node using EPMD and port 4369, the nodes communicate directly. The ports used in this case are random. You can limit the range of ports when starting Erlang with a command-line parameter, for example:

erl ... -kernel inet_dist_listen_min 4370 inet_dist_listen_max 4375
(minimum 4370, maximum 4375)
5.3 Erlang Cookie

The Erlang cookie is a string with numbers and letters. An Erlang node reads the cookie at startup from the command-line parameter -setcookie. If not indicated, the cookie is read from the cookie file $HOME/.erlang.cookie. If this file does not exist, it is created immediately with a random cookie. Two Erlang nodes communicate only if they have the same cookie. Setting a cookie on the Erlang node allows you to structure your Erlang network and define which nodes are allowed to connect to which.
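As a small illustration of the cookie file's format and permissions (using a scratch directory rather than a real $HOME, and faking the random cookie that Erlang would generate):

```shell
# Simulate $HOME/.erlang.cookie in a scratch directory (not a real Erlang home)
scratch=$(mktemp -d)
cookie_file="$scratch/.erlang.cookie"

# Erlang generates a random cookie; here we fake a 20-letter one
tr -dc 'A-Z' < /dev/urandom | head -c 20 > "$cookie_file"

# The cookie file should be readable only by its owner
chmod 400 "$cookie_file"

ls -l "$cookie_file"
```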

Thanks to Erlang cookies, you can prevent access to the Erlang node by mistake, for example when there are several Erlang nodes running different programs in the same machine.


Setting a secret cookie is a simple way to hinder unauthorized access to your Erlang node. However, the cookie system is not ultimately effective at preventing unauthorized access or intrusion to an Erlang node. The communication between Erlang nodes is not encrypted, so the cookie could be read by sniffing the traffic on the network. The recommended way to secure the Erlang node is to block port 4369.

5.4 Erlang Node Name

An Erlang node may have a node name. The name can be short (if indicated with the command-line parameter -sname) or long (if indicated with the parameter -name). Starting an Erlang node with -sname limits the communication between Erlang nodes to the LAN.

Using the option -sname instead of -name is a simple way to hinder unauthorized access to your Erlang node. However, it is not ultimately effective at preventing access, because it may be possible to fake being on another network using a modified version of Erlang epmd. The recommended way to secure the Erlang node is to block port 4369.

5.5 Securing Sensitive Files

ejabberd stores sensitive data in the file system, either in plain text or in binary files. The file system permissions should be set so that only the proper user can read, write and execute those files and directories.

ejabberd configuration file: /etc/ejabberd/ejabberd.cfg
Contains the JID of administrators and passwords of external components. The backup files probably contain also this information, so it is preferable to secure the whole /etc/ejabberd/ directory.
ejabberd service log: /var/log/ejabberd/ejabberd.log
Contains IP addresses of clients. If the loglevel is set to 5, it contains whole conversations and passwords. If a logrotate system is used, there may be several log files with similar information, so it is preferable to secure the whole /var/log/ejabberd/ directory.

Mnesia database spool files: /var/lib/ejabberd/db/*
The files store binary data, but some parts are still readable. The files are generated by Mnesia and their permissions cannot be set directly, so it is preferable to secure the whole /var/lib/ejabberd/db/ directory.
Erlang cookie file: /var/lib/ejabberd/.erlang.cookie
See section 5.3.
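A hedged sketch of the permission tightening this section describes, run against a throwaway mock of the directory tree rather than a live install (on a real system the owner would be the ejabberd user and the paths would not be prefixed by a scratch root):

```shell
# Mock the ejabberd directory layout under a scratch root
root=$(mktemp -d)
mkdir -p "$root/etc/ejabberd" "$root/var/log/ejabberd" "$root/var/lib/ejabberd/db"
touch "$root/etc/ejabberd/ejabberd.cfg" "$root/var/log/ejabberd/ejabberd.log"

# Strip group and other access: only the owning user may read, write or enter
chmod -R go-rwx "$root/etc/ejabberd" "$root/var/log/ejabberd" "$root/var/lib/ejabberd/db"

stat -c '%a %n' "$root/etc/ejabberd" "$root/etc/ejabberd/ejabberd.cfg"
```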

Sunday, March 29, 2009

ejabberd 2.0.4 Installation and Operation Guide Chapter 6 Clustering

ejabberd 2.0.4 Installation and Operation Guide

I needed this for work, so I read through the documentation and translated it as I went. The translation follows my own understanding and is not necessarily literal; let's see whether this way of studying works.

Original text:
http://www.process-one.net/en/ejabberd/guide_en#htoc73
Chapter 6 Clustering
6.1 How it Works

A Jabber domain is served by one or more ejabberd nodes. These nodes can be run on different machines that are connected via a network. They all must be able to connect to port 4369 of all other nodes, and must have the same magic cookie (see the Erlang/OTP documentation; in other words, the file ~ejabberd/.erlang.cookie must be the same on all nodes). This is needed because all nodes exchange information about connected users, s2s connections, registered services, etc.

Each ejabberd node has the following modules:
* router,
* local router,
* session manager,
* s2s manager.

6.1.1 Router

This module is the main router of Jabber packets on each node. It routes them based on their destination's domains. It uses a global routing table. The domain of the packet's destination is searched in the routing table, and if it is found, the packet is routed to the appropriate process. If not, it is sent to the s2s manager.
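The router's lookup can be sketched as a toy model in shell (the domains and process names are invented; real ejabberd keeps this routing table in Mnesia and routes to Erlang processes):

```shell
# Toy model of the per-node router: look up the destination domain in the
# routing table; on a miss, hand the packet to the s2s manager.
route_packet() {
    case "$1" in
        example.com)            echo "route to local_router" ;;
        conference.example.com) echo "route to muc_service" ;;
        *)                      echo "forward to s2s manager" ;;
    esac
}

route_packet "example.com"      # a domain this cluster serves
route_packet "remote.org"       # an unknown domain goes to s2s
```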

6.1.2 Local Router

This module routes packets which have a destination domain equal to one of this server's host names. If the destination JID has a non-empty user part, it is routed to the session manager,
otherwise it is processed depending on its content.

6.1.3 Session Manager

This module routes packets to local users. It looks up to which user resource a packet must be sent via a presence table. Then the packet is either routed to the appropriate c2s process, or stored in offline storage, or bounced back.

6.1.4 s2s Manager

This module routes packets to other Jabber servers. First, it checks if an opened s2s connection from the domain of the packet's source to the domain of the packet's destination exists. If that is the case, the s2s manager routes the packet to the process serving this connection, otherwise a new connection is opened.

6.2 Clustering Setup

Suppose you already configured ejabberd on one machine named first, and you need to set up another one to make an ejabberd cluster. Then do the following steps:

1. Copy ~ejabberd/.erlang.cookie file from first to second.

(I installed from source, and ran configure with the defaults, so the location here is:
[root@localhost ejabberd]# pwd
/var/lib/ejabberd

[root@localhost ejabberd]# cat .erlang.cookie
PJNBBQWMBVEYRVSFLLMV[root@localhost ejabberd]#

(alt) You can also add `-cookie content_of_.erlang.cookie' option to all `erl' commands below.
2. On second run the following command as the ejabberd daemon user, in the working directory of ejabberd:
erl -sname ejabberd \
-mnesia dir "/var/lib/ejabberd/" \
-mnesia extra_db_nodes "['ejabberd@first']" \
-s mnesia

This will start Mnesia serving the same database as ejabberd@first. You can check this by running the command `mnesia:info().'. You should see a lot of remote tables and a line like the following:
Note: the Mnesia directory may be different in your system. To know where ejabberd expects Mnesia to be installed by default, call the ejabberdctl script (section 4.1) without options and it will show some help, including the Mnesia database spool dir.

running db nodes = [ejabberd@first, ejabberd@second]

3. Now run the following in the same `erl' session:

mnesia:change_table_copy_type(schema, node(), disc_copies).

This will create local disc storage for the database.


(alt) Change storage type of the scheme table to `RAM and disc copy' on the second node via the Web Admin.

4. Now you can add replicas of various tables to this node with `mnesia:add_table_copy' or `mnesia:change_table_copy_type' as above (just replace `schema' with another table name and `disc_copies' can be replaced with `ram_copies' or `disc_only_copies').

Which tables to replicate depends very much on your needs; you can get some hints from the command `mnesia:info().', by looking at the size of tables and the default storage type for each table on first.


Replicating a table makes lookups in this table faster on this node. Writing, on the other hand, will be slower. And of course, if the machine with one of the replicas is down, the other replicas will be used.
(My note: roughly, replicating a table to the local node makes reads faster and writes slower, and with several replicas, if one machine goes down the other replicas keep serving: high availability.)


Also section 5.3 (Table Fragmentation) of Mnesia User's Guide can be helpful.
(alt) Same as in previous item, but for other tables.

5. Run `init:stop().' or just `q().' to exit from the Erlang shell. This can take some time if Mnesia has not yet transferred and processed all the data it needs from first.

6. Now run ejabberd on second with a configuration similar to the one on first: you probably do not need to duplicate the `acl' and `access' options because they will be taken from first; and mod_irc should be enabled only on one machine in the cluster.
You can repeat these steps for other machines supposed to serve this domain.
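The whole procedure can be condensed into one sketch (the host names first and second and the Mnesia path are the guide's examples; this is a checklist, not a script to run verbatim):

```shell
# Step 1, on second: copy the cookie from first
scp first:~ejabberd/.erlang.cookie ~ejabberd/.erlang.cookie

# Step 2, on second, as the ejabberd daemon user:
erl -sname ejabberd \
    -mnesia dir '"/var/lib/ejabberd/"' \
    -mnesia extra_db_nodes "['ejabberd@first']" \
    -s mnesia

# Steps 3-5, inside the erl shell:
#   mnesia:change_table_copy_type(schema, node(), disc_copies).
#   mnesia:add_table_copy(passwd, node(), disc_copies).  % repeat per table
#   init:stop().
```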


6.3 Service Load-Balancing

6.3.1 Components Load-Balancing

6.3.2 Domain Load-Balancing Algorithm

ejabberd includes an algorithm to load balance the components that are plugged on an ejabberd cluster. It means that you can plug one or several instances of the same component on each ejabberd cluster and that the traffic will be automatically distributed.

The default distribution algorithm tries to deliver to a local instance of a component. If several local instances are available, one instance is chosen randomly. If no instance is available locally, one instance is chosen randomly among the remote component instances.

If you need a different behaviour, you can change the load balancing behaviour with the option domain_balancing. The syntax of the option is the following:

{domain_balancing, "component.example.com", BalancingCriteria}.

Several balancing criteria are available:

* destination: the full JID of the packet's to attribute is used.
* source: the full JID of the packet's from attribute is used.
* bare_destination: the bare JID (without resource) of the packet's to attribute is used.
* bare_source: the bare JID (without resource) of the packet's from attribute is used.

If the value corresponding to the criteria is the same, the same component instance in the cluster will be used.
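For instance, to make every packet from the same bare sender JID stick to the same component instance, the option might look like this (a hypothetical ejabberd.cfg fragment using the syntax above):

```erlang
%% ejabberd.cfg: pick the component instance by the bare JID of the sender
{domain_balancing, "component.example.com", bare_source}.
```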
6.3.3 Load-Balancing Buckets

When there is a risk of failure for a given component, domain balancing can cause service trouble. If one component is failing the service will not work correctly unless the sessions are rebalanced.


In this case, it is best to limit the problem to the sessions handled by the failing component. This is what the domain_balancing_component_number option does, making the load balancing algorithm not dynamic, but sticky on a fixed number of component instances.
The syntax is the following:
{domain_balancing_component_number, "component.example.com", N}

Tuesday, March 24, 2009

Benchmarking, part 1: GNU dd (disk I/O)

After studying Fenng's talk last time, I reviewed some of my past work. To have a solid footing, I'm learning benchmarking techniques that may come in handy at work; today it's dd.
dd, or GNU dd, is mainly used to test the throughput of a system's disk I/O.

1. Write test
[root@rac2 home]# time dd if=/dev/zero of=/home/test_write bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 67.9907 seconds, 60.2 MB/s

real 1m11.611s
user 0m0.385s
sys 0m8.106s

The iostat output in another window:
[root@rac2 ~]# iostat 1 100
Linux 2.6.18-92.el5 (rac2) 03/25/2009

avg-cpu: %user %nice %system %iowait %steal %idle
1.17 0.00 1.07 6.89 0.00 90.88

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 18.09 593.05 7944.87 1947100 26084682

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 1.50 98.00 0.00 0.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 118.00 0.00 118928.00 0 118928

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 2.50 53.50 0.00 43.50

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 127.00 8.00 107864.00 8 107864

avg-cpu: %user %nice %system %iowait %steal %idle
1.00 0.00 17.91 81.09 0.00 0.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 121.78 0.00 113005.94 0 114136

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 1.51 98.49 0.00 0.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 117.00 0.00 119272.00 0 119272

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 1.00 99.00 0.00 0.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 117.00 0.00 119808.00 0 119808

avg-cpu: %user %nice %system %iowait %steal %idle
1.49 0.00 32.84 64.18 0.00 1.49
...

A SATA disk in a Dell PC; it looks like about 60 MB/s (B = byte, b = bit).
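The same write test can be reproduced at a harmless size just to see dd's reporting (the temporary file and the tiny count are placeholders; scale count back up to 1000000, as above, for a real benchmark):

```shell
# Miniature write test: 1000 blocks of 4 KiB = 4,096,000 bytes
out=$(mktemp)
dd if=/dev/zero of="$out" bs=4k count=1000 2>&1 | tail -n 1   # the "bytes copied" line
```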

Step 2: read/write throughput test:

[root@rac2 home]# time dd if=/home/test_write of=/home/test_rw bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 99.2975 seconds, 41.2 MB/s

real 1m39.313s
user 0m0.437s
sys 0m10.648s



A middle slice of the iostat output:
...
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 559.00 134520.00 24.00 134520 24

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 8.00 69.50 0.00 22.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 479.21 84483.17 1037.62 85328 1048

avg-cpu: %user %nice %system %iowait %steal %idle
0.51 0.00 13.13 78.79 0.00 7.58

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 472.00 110424.00 4608.00 110424 4608

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 1.99 63.68 0.00 34.33

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 109.00 512.00 108544.00 512 108544

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 1.98 60.89 0.00 36.63

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 118.00 0.00 120832.00 0 120832

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 1.50 48.50 0.00 50.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 116.00 0.00 117760.00 0 117760

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 1.50 75.50 0.00 23.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 119.00 1800.00 113680.00 1800 113680

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.50 70.35 0.00 29.15

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 185.00 0.00 90904.00 0 90904

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 2.00 49.00 0.00 48.50

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 224.00 512.00 24784.00 512 24784

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 12.94 73.13 0.00 13.43

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 534.00 126568.00 5864.00 126568 5864

...

Step 3: read throughput test:
[root@rac2 home]# time dd if=/home/test_write of=/dev/null bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 1.82309 seconds, 2.2 GB/s

real 0m1.824s
user 0m0.306s
sys 0m1.514s

A blazingly fast result! It finished before iostat even caught up!
[root@rac2 ~]# iostat 1 100
Linux 2.6.18-92.el5 (rac2) 03/25/2009

avg-cpu: %user %nice %system %iowait %steal %idle
0.99 0.02 1.21 8.33 0.00 89.45

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 23.32 1412.55 9241.57 5677404 37144258

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.00 0.00 0.00 100.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 0.00 0.00 0.00 0 0

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.50 0.00 0.00 99.50

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 2.00 0.00 24.00 0 24

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.00 0.00 0.00 100.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 0.00 0.00 0.00 0 0

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 0.00 0.00 0.00 99.50

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 1.98 0.00 79.21 0 80

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.00 0.00 0.00 100.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 2.00 0.00 40.00 0 40

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.50 0.00 0.00 99.50

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 3.00 0.00 32.00 0 32

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.00 0.00 0.00 100.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 8.00 0.00 168.00 0 168

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 6.50 0.00 0.00 93.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 0.00 0.00 0.00 0 0

avg-cpu: %user %nice %system %iowait %steal %idle
9.50 0.00 41.00 0.00 0.00 49.50

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 8.00 0.00 184.00 0 184

avg-cpu: %user %nice %system %iowait %steal %idle
5.50 0.00 28.50 0.00 0.00 66.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 0.00 0.00 0.00 0 0

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 1.00 0.00 0.00 99.00

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 43.00 0.00 496.00 0 496

The above is an ordinary PC with a SATA disk.
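A note on that 2.2 GB/s "read": the file had just been written, so it was almost certainly served from the Linux page cache, not the disk. A hedged sketch of forcing a real disk read (requires root; /proc/sys/vm/drop_caches exists on 2.6.16+ kernels):

```shell
sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes
time dd if=/home/test_write of=/dev/null bs=4k count=1000000
```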
This isn't finished; I'll write more as I learn.
Next I plan to revisit bonnie and bonnie++,
and to study iozone: http://www.iozone.org/

References:
http://www.dbanotes.net/database/linux_io_benchmark_tools_compare.html
http://morphis.spaces.live.com/blog/cns!10886869AD52662A!1058.trak
http://hi.baidu.com/thinkinginlamp/blog/item/3f310c336edcfcfc1a4cff8a.html

Wednesday, March 18, 2009

Installing VMware Server 2.0 on CentOS 5.2

[root@rac1 VM]# rpm -ivh VMware-server-2.0.0-122956.x86_64.rpm
Preparing... ########################################### [100%]
1:VMware-server ########################################### [100%]

The installation of VMware Server 2.0.0 for Linux completed successfully.
You can decide to remove this software from your system at any time by
invoking the following command: "rpm -e VMware-server".

Before running VMware Server for the first time, you need to
configure it for your running kernel by invoking the
following command: "/usr/bin/vmware-config.pl".

Enjoy,

--the VMware team

[root@rac1 VM]# /usr/bin/vmware-config.pl
Making sure services for VMware Server are stopped.

Stopping VMware autostart virtual machines:
Virtual machines [FAILED]
Stopping VMware management services:
VMware Virtual Infrastructure Web Access
VMware Server Host Agent [FAILED]
Stopping VMware services:
VMware Authentication Daemon [ OK ]
Virtual machine monitor [ OK ]

You must read and accept the End User License Agreement to continue.
Press enter to display it.
...

Do you accept? (yes/no) yes

Thank you.

The bld-2.6.18-8.el5-x86_64smp-RHEL5 - vmmon module loads perfectly into the
running kernel.

The bld-2.6.18-8.el5-x86_64smp-RHEL5 - vmci module loads perfectly into the
running kernel.

The bld-2.6.18-8.el5-x86_64smp-RHEL5 - vsock module loads perfectly into the
running kernel.

Do you want networking for your virtual machines? (yes/no/help) [yes] yes

Configuring a bridged network for vmnet0.

Please specify a name for this network.
[Bridged]
Your computer has multiple ethernet network interfaces available: eth0, eth0:1.
Which one do you want to bridge to vmnet0? [eth0]

The following bridged networks have been defined:

. vmnet0 is bridged to eth0

Do you wish to configure another bridged network? (yes/no) [no]

Do you want to be able to use NAT networking in your virtual machines? (yes/no)
[yes]

Configuring a NAT network for vmnet8.

Please specify a name for this network. [NAT]

Do you want this program to probe for an unused private subnet? (yes/no/help)
[yes]

Probing for an unused private subnet (this can take some time)...

The subnet 192.168.146.0/255.255.255.0 appears to be unused.

The following NAT networks have been defined:

. vmnet8 is a NAT network on private subnet 192.168.146.0.

Do you wish to configure another NAT network? (yes/no) [no]


Do you want to be able to use host-only networking in your virtual machines?
[yes]

Configuring a host-only network for vmnet1.

Please specify a name for this network.
[HostOnly]
Do you want this program to probe for an unused private subnet? (yes/no/help)
[yes]

Probing for an unused private subnet (this can take some time)...
The subnet 192.168.176.0/255.255.255.0 appears to be unused.

The following host-only networks have been defined:

. vmnet1 is a host-only network on private subnet 192.168.176.0.

Do you wish to configure another host-only network? (yes/no) [no]

The bld-2.6.18-8.el5-x86_64smp-RHEL5 - vmnet module loads perfectly into the
running kernel.

Please specify a port for remote connections to use [902]

Do you want this program to set up permissions for your registered virtual
machines? This will be done by setting new permissions on all files found in
the "/etc/vmware/vm-list" file. [no]

Please specify a port for standard http connections to use [8222]
Please specify a port for secure http (https) connections to use [8333]

The current administrative user for VMware Server is ''. Would you like to
specify a different administrator? [no]

Using root as the VMware Server administrator.

In which directory do you want to keep your virtual machine files?
[/var/lib/vmware/Virtual Machines]

The path "/var/lib/vmware/Virtual Machines" does not exist currently. This
program is going to create it, including needed parent directories. Is this
what you want? [yes]

Do you want to enter a serial number now? (yes/no/help) [no]

Creating a new VMware VIX API installer database using the tar4 format.

Installing VMware VIX API.

In which directory do you want to install the VMware VIX API binary files?
[/usr/bin]
In which directory do you want to install the VMware VIX API library files?
[/usr/lib/vmware-vix/lib]

The path "/usr/lib/vmware-vix/lib" does not exist currently. This program is
going to create it, including needed parent directories. Is this what you want?
[yes]

In which directory do you want to install the VMware VIX API document pages?
[/usr/share/doc/vmware-vix]

The path "/usr/share/doc/vmware-vix" does not exist currently. This program is
going to create it, including needed parent directories. Is this what you want?
[yes]

The installation of VMware VIX API 1.6.0 build-122956 for Linux completed
successfully. You can decide to remove this software from your system at any
time by invoking the following command: "/usr/bin/vmware-uninstall-vix.pl".

Enjoy,

--the VMware team

Starting VMware services:
Virtual machine monitor [ OK ]
Virtual machine communication interface [ OK ]
VM communication interface socket family: [ OK ]
Virtual ethernet [ OK ]
Bridged networking on /dev/vmnet0 [ OK ]
Host-only networking on /dev/vmnet1 (background) [ OK ]
DHCP server on /dev/vmnet1 [ OK ]
Host-only networking on /dev/vmnet8 (background) [ OK ]
DHCP server on /dev/vmnet8 [ OK ]
NAT service on /dev/vmnet8 [ OK ]
VMware Server Authentication Daemon (background) [ OK ]
Shared Memory Available [ OK ]
Starting VMware management services:
VMware Server Host Agent (background) [ OK ]
VMware Virtual Infrastructure Web Access
Starting VMware autostart virtual machines:
Virtual machines [ OK ]

The configuration of VMware Server 2.0.0 build-122956 for Linux for this
running kernel completed successfully.

Broadly the same as VMware Server 1.0.x.

Management is done through the web interface, default port 8222.
It runs now; I'll follow up if I run into problems later.

Tuesday, March 17, 2009

XMPP RFCs

Leaving a bookmark:
http://xmpp.org/rfcs/
The company will soon be doing IM-related development, so I think it's worth studying the XMPP standards.
Saved for future reference!