Wednesday, December 30, 2009

(RedHat, CentOS, OEL) Experiment: manually freeing memory caches

Environment: CentOS 5.2

[root@test ~]# free -m
                        total       used       free     shared    buffers     cached
Mem:         15721      14056       1664          0        533      11283
-/+ buffers/cache:       2239      13481
Swap:        16386          0      16386
[root@test ~]# sync;sync;sync
[root@test ~]# echo 3 > /proc/sys/vm/drop_caches
[root@test ~]# free -m
                      total       used       free     shared    buffers     cached
Mem:         15721       1837      13883          0          0         44
-/+ buffers/cache:       1792      13928
Swap:        16386          0      16386
[root@LITHIUM ~]#

Explanation (quoted from the kernel documentation for /proc/sys/vm/drop_caches):

Writing to this file causes the kernel to drop clean caches,
dentries and inodes from memory, causing that memory to become
free.

To free pagecache, use echo 1 > /proc/sys/vm/drop_caches;

to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches;
to free pagecache, dentries and inodes, use echo 3 > /proc/sys/vm/drop_caches.

Because this is a non-destructive operation and dirty objects
are not freeable, the user should run sync(8) first.
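A minimal sketch of the whole sequence (assuming a 2.6.16+ kernel, where this knob exists; the sysctl form is equivalent to writing the file directly):

sync
echo 3 > /proc/sys/vm/drop_caches
# or, equivalently:
sync; sysctl -w vm.drop_caches=3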

2009 Year-End Review

1. Passed the OCP exams (10g, 11g)

2. Passed the OCE exam (RAC Expert)

3. Learned a new development language: Python

4. Tried out iPhone development (Objective-C)

Hopes for the coming year:

1. Take another shot at the CISSP

2. Attempt the PMP

3. Deepen my Oracle skills further

4. Deepen my *nix skills further

5. Try to make some progress with my English

6. Keep reflecting on and improving how I study

Thanks to my family and to my friends for their support.

Monday, December 21, 2009

Learning to compare NSString objects

How do you compare two NSString objects for equality in Objective-C and Cocoa?
The first thing that comes to mind is "==", but that is not correct; you should use isEqualToString: instead.

NSString *str1 = @"Homebrew";
NSString *str2 = @"Homebrew";

if (str1 == str2) {
    NSLog(@"str1 equals str2");
} else {
    NSLog(@"Str1 does not equal str2");
}

NSLog(@"str1 address in memory : %p", str1);
NSLog(@"str2 address in memory : %p", str2);



char * cStr = "Homebrew";
NSString *str3 = [NSString stringWithUTF8String:cStr];
NSString *str4 = @"Homebrew";

if (str3 == str4) {
    NSLog(@"str3 equals to str4");
} else {
    NSLog(@"str3 does not equals to str4");
}

NSLog(@"str3 address in memory is %p", str3);
NSLog(@"str4 address in memory is %p", str4);


if ([str3 isEqualToString: str4]) {
    NSLog(@"str3 equals str4");
} else {
    NSLog(@"str3 does not equal str4");
}


Result:
[Session started at 2009-12-22 09:58:59 +0800.]
2009-12-22 09:58:59.699 Movie_Player2[34747:10b] str1 equals str2
2009-12-22 09:58:59.710 Movie_Player2[34747:10b] str1 address in memory : 0x2030
2009-12-22 09:58:59.710 Movie_Player2[34747:10b] str2 address in memory : 0x2030
2009-12-22 09:58:59.711 Movie_Player2[34747:10b] str3 does not equals to str4
2009-12-22 09:58:59.711 Movie_Player2[34747:10b] str3 address in memory is 0x105bc0
2009-12-22 09:58:59.712 Movie_Player2[34747:10b] str4 address in memory is 0x2030
2009-12-22 09:58:59.713 Movie_Player2[34747:10b] str3 equals str4

The Debugger has exited with status 0.

The referenced explanation:
The reason this works is that the compiler can manage strings internally when you define them using the shortcut method (@"stringhere"), and it will store only one internal reference for duplicates.
You can verify that the strings refer to the same content by looking at the locations in memory where the variables are stored.

The correct way to compare strings
The right way to go about this is to use the isEqualToString: method of the NSString class.


ref:http://iphonedevelopertips.com/cocoa/compare-nsstrings-objects.html

Thursday, December 17, 2009

How to shut down MySQL cleanly

I had always rather brutally kill -9'd or killall'd the mysqld process; I finally found a respectable way to do it: mysql.server stop

 

[root@localhost mysql]# ./mysql.server
Usage: ./mysql.server  {start|stop|restart|reload|force-reload|status}  [ MySQL server options ]
[root@localhost mysql]#
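So a clean shutdown is just the stop argument. For completeness, mysqladmin shutdown is the other documented clean-shutdown path (a sketch; credentials depend on your setup):

./mysql.server stop
# or:
mysqladmin -u root -p shutdown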

It lives here:

[root@localhost mysql]# pwd
/usr/local/mysql/share/mysql

My installation was compiled from source; /usr/local/mysql is the default directory.

From the MySQL documentation:

5.1.4. mysql.server: the MySQL server startup script

MySQL distributions on Unix include the mysql.server script. It can be used on systems such as Linux and Solaris that use System V-style run directories to start and stop system services. It is also used by the Mac OS X Startup Item for MySQL.

mysql.server can be found in the support-files directory under your MySQL installation directory or in a MySQL source tree.

If you use the Linux server RPM package (MySQL-server-VERSION.rpm), the mysql.server script is installed in the /etc/init.d directory with the name mysql. You need not install it manually. See Section 2.4, "Installing MySQL on Linux", for more information on the Linux RPM packages.

Some vendors provide RPM packages that install the startup script under a different name such as mysqld.

If you install MySQL from a source distribution or using a binary distribution format that does not install mysql.server automatically, you can install it manually. Instructions are provided in Section 2.9.2.2, "Starting and Stopping MySQL Automatically".

mysql.server reads options from the [mysql.server] and [mysqld] sections of option files. (For backward compatibility, it also reads [safe_mysqld] sections, although you should rename such sections to [mysqld_safe] in MySQL 5.1 installations.)

ref: http://dev.mysql.com/doc/refman/5.1/zh/database-administration.html#mysql-server

I took a look: it really is just a shell script. Good to know!

Monday, December 14, 2009

mysqlimport and LOAD DATA experiments

Test data:

[root@aaaa ~]# cat empl
title,name,nick,tel
ivan,ivanyao,ivan,123
ivan2,ivanyao2,ivan2,123
ivan3,ivanyao3,ivan3,123
ivan4,ivanyao4,ivan4,123
ivan5,ivanyao5,ivan5,123

Create the table:

mysql> create table empl(
    -> title varchar(100),
    -> name varchar(100),
    -> nick varchar(100),
    -> tel int);
Query OK, 0 rows affected (0.18 sec)
mysql>
mysql> desc empl
    -> ;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| title | varchar(100) | YES  |     | NULL    |       |
| name  | varchar(100) | YES  |     | NULL    |       |
| nick  | varchar(100) | YES  |     | NULL    |       |
| tel   | int(11)      | YES  |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+
4 rows in set (0.02 sec)
mysql>

mysqlimport experiment:

[root@aaaa ~]# mysqlimport --local  --fields-terminated-by=',' 'db' 'empl'
db.empl: Records: 6  Deleted: 0  Skipped: 0  Warnings: 0
[root@aaaa ~]# mysqlimport --local  --fields-terminated-by=',' db empl
db.empl: Records: 6  Deleted: 0  Skipped: 0  Warnings: 0

The --local option is important here; without it you get an error like this:

[root@aaaa ~]# mysqlimport --fields-terminated-by=',' db empl
mysqlimport: Error: File '/var/lib/mysql/db/empl' not found (Errcode: 2), when using table: empl

LOAD DATA experiment:

mysql> load data local infile '/root/empl' into table empl fields terminated by ',' ignore 1 lines;
Query OK, 5 rows affected (0.00 sec)
Records: 5  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select * from empl;
+-------+----------+-------+------+
| title | name     | nick  | tel  |
+-------+----------+-------+------+
| ivan  | ivanyao  | ivan  |  123 |
| ivan2 | ivanyao2 | ivan2 |  123 |
| ivan3 | ivanyao3 | ivan3 |  123 |
| ivan4 | ivanyao4 | ivan4 |  123 |
| ivan5 | ivanyao5 | ivan5 |  123 |
| title | name     | nick  |    0 |
| ivan  | ivanyao  | ivan  |  123 |
| ivan2 | ivanyao2 | ivan2 |  123 |
| ivan3 | ivanyao3 | ivan3 |  123 |
| ivan4 | ivanyao4 | ivan4 |  123 |
| ivan5 | ivanyao5 | ivan5 |  123 |
| title | name     | nick  |    0 |
| ivan  | ivanyao  | ivan  |  123 |
| ivan2 | ivanyao2 | ivan2 |  123 |
| ivan3 | ivanyao3 | ivan3 |  123 |
| ivan4 | ivanyao4 | ivan4 |  123 |
| ivan5 | ivanyao5 | ivan5 |  123 |
+-------+----------+-------+------+
17 rows in set (0.00 sec)
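A note on the 17 rows: mysqlimport ran twice without skipping the header line, so each run loaded 6 records (the header's tel value fails to parse as an integer, hence the 0), and LOAD DATA added 5 more. A sketch of a cleaner re-import using standard mysqlimport options (--delete empties the table first, --ignore-lines skips the header):

mysqlimport --local --delete --ignore-lines=1 --fields-terminated-by=',' db empl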

Noting it down for future reference!

Sunday, December 13, 2009

The "SP2-0618: Cannot find the Session Identifier" problem

While doing an experiment on Oracle 11g, I ran into this error:

SQL> set autotrace on
SP2-0618: Cannot find the Session Identifier.  Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

Googling revealed that the current user lacked sufficient privileges, so:

SQL> grant all on plan_table to u2;

Grant succeeded.

 

SQL> grant select any dictionary to u2;

Grant succeeded.

Execute again: OK.

SQL> set autotrace on

SQL> set timing on
SQL> select owner, count(*) from my_all_objects group by owner;

OWNER                            COUNT(*)
------------------------------ ----------
WKSYS                                 840
MDSYS                                4896
WK_TEST                                36
U2                                    192
PUBLIC                             160218
CTXSYS                                534
OLAPSYS                              1056
SYSTEM                                 54
EXFSYS                                480
ORDSYS                              12606
ORDPLUGINS                             30

OWNER                            COUNT(*)
------------------------------ ----------
XDB                                  1212
FLOWS_030000                          942
SYS                                139512
WMSYS                                 702

15 rows selected.

Elapsed: 00:00:00.22

Execution Plan
----------------------------------------------------------
Plan hash value: 2509106709

--------------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |    15 |    90 |  1371   (3)| 00:00:17 |
|   1 |  RESULT CACHE       | 27vtdg9w24wgcb8d23j5h07m2v |       |       |            |          |
|   2 |   HASH GROUP BY     |                            |    15 |    90 |  1371   (3)| 00:00:17 |
|   3 |    TABLE ACCESS FULL| MY_ALL_OBJECTS             |   323K|  1894K|  1348   (1)| 00:00:17 |
--------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=2; dependencies=(U2.MY_ALL_OBJECTS); parameters=(nls); name="select owner, count(*) from my_all_objects group by owner"

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       4869  consistent gets
          0  physical reads
        116  redo size
        863  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         15  rows processed

SQL>

It seems that in Oracle 11g you no longer need to run the $ORACLE_HOME/sqlplus/admin/plustrce.sql script to create the PLUSTRACE role.
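For reference, a sketch of the pre-11g procedure that script performed (run as SYSDBA; the exact script contents vary by release):

sqlplus / as sysdba
SQL> @?/sqlplus/admin/plustrce.sql
SQL> grant plustrace to u2;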

Noting it down.

Wednesday, December 9, 2009

Web tool: httprint

When doing web work you usually pay attention to which web server a site runs, but some well-hardened servers mask their server information. What then?

A nice tool for this is httprint: http://www.net-square.com/httprint/

There are Windows, Linux, and Mac versions.

A test screenshot from the Windows version:

[Screenshot: httprint fingerprinting results]

Note that some sites block ICMP, which can skew the test results.

Just disable the ICMP check in the Options dialog.

Noting it down for reference!

Reference: http://net-square.com/httprint/httprint_paper.html

Monday, December 7, 2009

The smbclient mount command changed in RHEL 5

Mounting a Windows share from Linux:

On RHEL <= 4, smbclient-style mounts took this form:

# mount -t smbfs -o username=test,password=testpass //ntserver/download /mnt/ntserver 

From RHEL 5 onward it becomes:

# mount -t cifs //ntserver/download -o username=test,password=testpass /mnt/ntserver

Noting it down. For automatic mounting, edit /etc/fstab:

# RHEL <= 4 (smbfs):
//ntserver/download /mnt/ntserver smbfs username=test,password=testpass 0 0

# RHEL 5 and later (cifs):
//ntserver/download /mnt/ntserver cifs user,uid=500,rw,suid,username=test,password=testpass 0 0


A slightly more secure approach:


//winbox/getme /mnt/win cifs user,uid=500,rw,noauto,suid,credentials=/root/secret.txt 0 0


And the /root/secret.txt file looks like this:



username=test
password=testpass
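
Since this file holds a plaintext password, it should be readable by root only (a small extra precaution worth adding):

chmod 600 /root/secret.txt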


ref:


http://wiki.centos.org/TipsAndTricks/WindowsShares


http://www.cyberciti.biz/tips/how-to-mount-remote-windows-partition-windows-share-under-linux.html


http://www.cyberciti.biz/faq/configure-a-system-to-automount-a-samba-share-with-etcfstab/


Noting it down for reference!

Sun ZFS study notes

Sun offers a free ZFS tutorial that I found quite good. I'm writing down parts of the content together with my own experiments for future reference.

Reference:
https://learning.sun.com/solc/files/solc/tblo_selfcontained/1179270410/module1/default.htm

What is ZFS?

ZFS is a revolutionary new file system that fundamentally changes the way file systems are administered, with features and benefits not found in any other file system available today. ZFS has been designed to be robust, scalable, and simple to administer.

Instead of using a storage volume model and its associated limitations, ZFS aggregates storage devices into 'pools.' The storage pool describes the physical characteristics of the system's storage (device layout, data redundancy, and so on) and acts as a general data store from which file systems can be created.

The Benefits of ZFS

ZFS's pooling model has a lot of useful benefits. For example, using pools means that you no longer need to predetermine the size of a file system, as file systems grow automatically within the space allocated to the storage pool.

More Benefits of ZFS

ZFS is also designed to scale easily and can handle extremely large quantities of data. It does this by using 128-bit data addressing and dynamically scaling its metadata.

In addition, ZFS ensures that its data is always consistent on disk. Because ZFS uses checksums with each block of data, it can detect data corruption caused by any element of the storage subsystem, not just disk errors. That means you can use inexpensive disks to provide similar reliability to high-priced storage systems.

Even More Benefits of ZFS

Finally, ZFS provides a greatly simplified administration model. ZFS makes it easy to create and manage file systems without needing multiple commands or editing configuration files.

For example, with ZFS you can easily:

  • Set quotas or reservations
  • Turn compression on or off
  • Manage mount points for numerous file systems with a single command, and
  • Examine or repair devices without having to understand a separate set of volume manager commands

Now you understand why ZFS is such a popular file system!
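For example, each task in that list maps to a one-line command. A sketch, using a hypothetical pool 'tank' and dataset 'tank/home':

zfs set quota=10G tank/home          # set a quota
zfs set compression=on tank/home     # turn compression on
zfs set mountpoint=/export/home tank/home
zpool scrub tank                     # examine/repair the pool's devices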

Q: The two basic elements of a ZFS file system are?
A: Pools and datasets.

Q: Which kind of storage model does ZFS use?
A: Pooled storage.

Summary

In summary, ZFS is an amazing new file system technology that is fast, scalable, and filled with useful features.

Understanding ZFS Basics

Now that you have a general understanding of ZFS, let's dive into the details. In this topic we'll cover the basic components of the ZFS file system, such as pools and datasets. As always, if you think that you already have a good understanding of these basic concepts, you're welcome to jump to Lesson 2.

If you're ready to proceed, click the Next button.

Pools and Datasets

Conceptually, ZFS is pretty simple. There are two basic elements: Pools and datasets.

Pools consist of storage devices that provide space for datasets. Each pool comprises one or more virtual devices. A virtual device is an internal representation of the storage pool that describes the layout of physical storage and its fault characteristics. In a system with multiple devices, these storage devices are often grouped in pairs.

Datasets are groups of information that reside in storage space allocated from the pools.

Think of it this way:

  • A pool is like a convention center where a tradeshow is taking place.

  • The storage devices are like convention center rooms that can be opened up to fit the number of people in attendance.

  • The datasets are attendees from different professional backgrounds. For example, one dataset is made up of the marketing people, another of engineers, etc. These attendees (or datasets) can sit wherever they want in the room until it is full. Then another room can be opened up to accommodate more attendees. When the tradeshow is over, the attendees leave the convention center, leaving room for others to use the space.

Pools and Datasets (cont)

Now let's take a closer look at datasets. A “dataset” is a generic name for the following ZFS entities: file systems, volumes, snapshots, or clones.

  • A file system dataset is just a directory hierarchy for organizing and storing files.

  • A volume is a dataset that is used to emulate a physical device. For example, ZFS swap and dump volumes are created automatically when the OpenSolaris release is installed.

  • A snapshot is a read-only image of a file system or volume at a given point in time.

  • A clone is a file system whose initial contents are identical to the contents of a snapshot.
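
A sketch of how the snapshot and clone entities above are created (hypothetical dataset names):

zfs snapshot tank/home@monday          # read-only point-in-time image
zfs clone tank/home@monday tank/fork   # writable file system based on the snapshot
zfs list -t snapshot                   # list existing snapshots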

Examining Pools and Datasets

In ZFS, you can examine pools and datasets with the corresponding administration commands:

  • “zpool” for pools, and
  • “zfs” for datasets

Using the ZPOOL Command

First, use the “zpool status tank” command to see the available pools.

This system has just one pool called “tank.”

Take a moment to review the “config” section. This section displays the devices that make up the pool. This is a simple pool that contains a single storage device called “c1t0d0.”

---------- sample---

root@opensolaris:~# zpool status
  pool: mypool
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          c9d0      ONLINE       0     0     0
errors: No known data errors
  pool: rpool
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7d0s0    ONLINE       0     0     0
errors: No known data errors
root@opensolaris:~#
root@opensolaris:~# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c7d0 <DEFAULT cyl 30390 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
       1. c8d1 <drive type unknown>
          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
       2. c9d0 <WDC WD80-  WD-WMAM9SH8120-0001-74.50GB>
          /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
       3. c10d0 <drive type unknown>
          /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0
Specify disk (enter its number): ^D

Create the tank pool:
root@opensolaris:~# zpool create tank c8d1
root@opensolaris:~#
root@opensolaris:~# zpool list tank
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank   696G   106K   696G     0%  ONLINE  -
root@opensolaris:~#

-----------------

Using the ZFS Command

Now let's use the “zfs list -r” command to review the datasets in a pool; the course example uses the root pool, or “rpool.”

First, let's look at the column names in the output display.

The “NAME” column lists the name of the file system.

The “USED” and “AVAIL” columns display the amount of space that has been used and the amount of space that is still available, respectively.

The “REFER” column displays the amount of data accessible within this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical.

The “MOUNTPOINT” column identifies the directory where the file system resides.
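
An illustrative zfs list layout showing those columns (made-up names and sizes, not from my machine):

# zfs list -r tank
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        130K   685G    19K  /tank
tank/data    19K   685G    19K  /tank/data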

Q: The zfs command is used to examine?
A: Datasets.

Summary

ZFS is a new file system technology. It uses pools and datasets. Pools include virtual devices and storage devices. Datasets include file systems, volumes, snapshots, and clones. These components can be explored with the zpool and zfs commands.

Ready to move on? Click the exit button to return to the lesson menu and continue to the next lesson.

https://learning.sun.com/solc/files/solc/tblo_selfcontained/1179270410/module2/default.htm

Introduction

Now that you understand the basics of ZFS, let's discuss how to create a ZFS pool. In this lesson, you'll learn how to create a ZFS pool and why it's important to use mirrored pools.

Creating a ZFS Pool

ZFS administration has been designed with simplicity in mind. Among the goals of the ZFS design is to reduce the number of commands needed to create a usable file system. When you create a new pool, a new ZFS file system is created and mounted automatically. Let me show you how it's done.

Let's start by creating a simple single-device pool named "tank."

Use the "zpool create" command to create the pool by first identifying the pool name “tank” and then the device, c1t0d0 as follows:

Now use the "zpool status tank" command to see the results.

You now have a single-disk storage pool named tank, with a single storage device called c1t0d0.

Although it is possible to create a single-disk pool, a mirrored pool provides data redundancy and better protection against disk failures. We'll cover how to create a mirrored pool in the next section.

To learn more about mirrors, see Lesson 3: Mirrors.

zfs create
zfs destroy
zfs rollback
zfs rename
zfs list
zfs mount
zfs clone
zfs promote

--- sample -----
zpool create tank c1t0d0
zpool status tank
root@opensolaris:~# zpool status tank
  pool: tank
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c8d1      ONLINE       0     0     0
errors: No known data errors
root@opensolaris:~#
---

Converting a Single-Device Storage Pool to a Mirrored Storage Pool

Mirrored storage pools are recommended over single-disk pools because mirrored pools can more effectively protect your data. After creating a single-disk pool, it is easy to convert it into a mirrored pool. Let's see how it's done.

First, we'll use the “zpool status tank” command to see how the pool is configured.

The output shows us that currently there is only one device in the pool.

To convert this single device pool to a mirror, use the “zpool attach” command followed by the pool name, the existing device name, and the new device name.

Now, rerun the “zpool status tank” command to see if you were successful.

In the output, notice that the status listing includes the “scrub” status. When a new device is attached to a mirror, ZFS automatically duplicates all of the existing pool data onto the mirror device. This is called a “resilver.”

----
zpool attach tank c1t0d0 c1t1d0   (make a mirror)
zpool status tank
Q: Although it's possible to create a single-disk pool, using mirrored pools is recommended.
A: True!
Q: To convert a single-disk pool to a mirrored pool, you add storage with which of the following commands?
A: attach

Summary

Creating ZFS pools is fast and simple. Just use the zpool create command and the device name and ZFS does the rest. However, make sure to follow up to convert the single-disk pool to a mirrored pool. More on this topic, see Lesson 3: Mirrors .

In a previous lesson, we created a single-device pool called 'tank.' Let's take another look at this pool. Use the “zpool status tank” command to review the pool configuration.

If you need to add more storage to this pool, start with the “zpool attach” command followed by the pool name, an existing device in the pool, and the new device that you want to attach.

Now use the zpool status tank command to see the results.

Review the OpenSolaris output on this page. Notice that both devices are now listed in the pool configuration. Also notice that both devices are now part of a “mirror” as identified in the “config” section. A mirror is a virtual device that stores identical copies of data on two or more disks. See Lesson 3: Mirrors for more information.

--

zpool status tank

zpool attach tank c1t0d0 c1t1d0

root@opensolaris:~# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c7d0 <DEFAULT cyl 30390 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
       1. c8d1 <ST375033-         9QK1MYJ-0001-698.63GB>
          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
       2. c9d0 <WDC WD80-  WD-WMAM9SH8120-0001-74.50GB>
          /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
       3. c10d0 <drive type unknown>
          /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0
Specify disk (enter its number): ^D
root@opensolaris:~# zpool status tank
  pool: tank
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c8d1      ONLINE       0     0     0
errors: No known data errors
root@opensolaris:~# zpool attach tank c8d1 c10d0
root@opensolaris:~# zpool status tank
  pool: tank
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Mon Dec  7 16:09:21 2009
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8d1    ONLINE       0     0     0
            c10d0   ONLINE       0     0     0  73K resilvered
errors: No known data errors
root@opensolaris:~#

---

Q: To add a device to the storage pool, you use which of the following ZFS commands?
A: attach

Summary

Adding storage to a ZFS pool is easy. Simply use the “zpool attach” command combined with the pool and device names and ZFS does the rest. Simple, isn't it? Let's move on to the next topic.

Introduction

When administering a system, its often necessary to adjust and reconfigure your setup. Often this includes destroying storage pools.

Destroying a ZFS Pool

Sometimes you make a mistake or need to reconfigure your system. To destroy a pool that is no longer needed, simply use the “zpool destroy” command. Let me show you how it's done.

First, use the “zpool status tank” command to review the pools on your system.

Review the image of the OpenSolaris output on this page. It looks like we have one pool named “tank” with two devices. Next, use the “zpool destroy” command plus the name of the pool you want to destroy.

Finally, use the “zpool status tank” command to see the results.

Review the image of the OpenSolaris output once again. Notice that the pool called 'tank' has been removed from the system.

---
zpool status tank
zpool destroy tank
zpool status tank
----sample---


root@opensolaris:~# zpool destroy tank
root@opensolaris:~# zpool status tank
cannot open 'tank': no such pool
root@opensolaris:~#
---------------------

ZFS provides a safety feature that allows you to recover a destroyed pool if the devices haven't been reused.

For example, suppose we accidentally destroyed the tank pool. We would simply use the “zpool import -D” command followed by the pool name to reimport the pool.

Then you would use the “zpool list” command to see the results.

Notice that the tank pool has been successfully restored.

---------
zpool import -D tank
zpool status tank
-------------sample ----------------
root@opensolaris:~# zpool import -D tank
root@opensolaris:~# zpool status tank
  pool: tank
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8d1    ONLINE       0     0     0
            c10d0   ONLINE       0     0     0
errors: No known data errors
root@opensolaris:~#
-------------------------------------
Q: The correct ZFS command to eliminate a pool is?
A: destroy

Summary

Now you know how to create a ZFS pool, add storage, and destroy a pool if necessary. You even know how to get it back if you delete it by accident.

https://learning.sun.com/solc/files/solc/tblo_selfcontained/1179270410/module3/default.htm

Traditionally, storage management products use a convention called a “mirror.” A mirror is a virtual device that stores identical copies of data on two or more disks. If any disk in a mirror fails, any other disk in that mirror can provide the same data.

The topics in this lesson will teach you all about using a mirrored ZFS configuration to protect your data. However, you should already know the basic definition of a pool and how to create one. If you've skipped those lessons or are still unsure about them, go back and review.

If you're ready, click Next to continue.

What Is A Mirror?

A “mirror” is a kind of storage pool that keeps identical copies of data on two or more disks.

This redundancy provides basic storage protection because if one device fails, the data is still available from the other device (or devices) in the pool.

What are the Benefits of a Mirrored ZFS Configuration?

The benefit of using a mirrored ZFS configuration is that ZFS can attempt to recover the data in the event of a disk failure.

A ZFS storage pool is really just a tree of blocks. ZFS helps to identify data errors by storing the checksum of each block in its parent block pointer, not in the block itself. Every block in the tree contains the checksums for all its children, so the entire pool is self-validating.

When an error occurs, ZFS automatically checks the other device in the mirror, and if it finds good data, it uses the good copy to repair the bad block.

Q: A mirror is?
A: A system that keeps a copy of the data on each one of the mirrored devices.
Q: ZFS's checksum feature allows it to?
A: Check for failed blocks of data.

Summary

In this topic you have learned that a mirrored ZFS configuration is a storage pool with built-in redundancy. You also learned that a mirrored ZFS configuration is recommended because it can recover data in the event of a disk failure.

Introduction

ZFS uses a process called 'mirroring' to improve data security by creating redundancy in the system. This is done by grouping storage devices in pairs and making sure that the data on each device is mirrored by the other. This arrangement is called a 'mirrored pool.'

In this lesson you'll learn how to set up a mirrored pool.

Creating a Mirrored ZFS Pool

Suppose you want to create a mirrored ZFS storage pool. Here's how you do it.

Simply use the “zpool create” command followed by the pool name and device names. We'll create a pool called “tank” with two attached devices called “c1t0d0” and “c1t1d0.”

Now use the “zpool status tank” command to see the results.

Notice that you now have a mirrored pool named tank with two devices.

--------

zpool create tank mirror c1t0d0 c1t1d0

zpool status tank

Converting a Single-Device Pool to a Mirror

Mirrored storage pools are recommended over single-disk pools because mirrored pools can more effectively protect your data. After creating a single-disk pool, it is easy to convert it into a mirrored pool. Let's see how it's done.

First, we'll use the “zpool status tank” command to see how the pool is configured.

The output shows us that currently there is only one device in the pool.

To convert this single device pool to a mirror, use the “zpool attach” command plus the pool name, the existing device name, and the name of the device you want to attach.

Now, rerun the “zpool status tank” command to see if you were successful.

In the output, notice that the status listing includes the “scrub” status. When a new device is attached to a mirror, ZFS automatically duplicates all of the existing pool data onto the mirror device. This is called a “resilver.”

---

zpool status tank

zpool attach tank c1t0d0 c1t1d0

zpool status tank

-----sample----

root@opensolaris:~# zpool create tank mirror c8d1 c10d0
root@opensolaris:~# zpool status tank
  pool: tank
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8d1    ONLINE       0     0     0
            c10d0   ONLINE       0     0     0
errors: No known data errors
root@opensolaris:~#
--------------------

--

Q: To convert a single-device pool to a multiple-device pool, you use which of the following?
A: zpool attach

Summary

Creating ZFS pools is fast and simple. Just use the zpool attach command and the device name to convert your single-device pool to a mirrored pool.

Introduction

While managing your ZFS mirror, you might want to modify the configuration. This could mean replacing one device with another or taking a device offline.

This topic will teach you how to manage these tasks.

Taking a Device Offline

ZFS allows individual devices to be taken offline or brought online. When hardware is unreliable or not functioning properly, ZFS continues to read or write data to the device, assuming the condition is only temporary. If the condition is not temporary, it is possible to instruct ZFS to ignore the device by bringing it offline. This is done with the “zpool offline” command.

Again, let's run the “zpool status tank” command to review our mirror configuration.

Suppose you wanted to take the device c1t1d0 offline for maintenance. You would type the “zpool offline” command followed by the pool name and then the device name.

Now let's run the “zpool status tank” command again to see what happened.

Notice that in the “Config” section the device is listed as “offline.” Also notice that the mirror's state is listed as “degraded.” This means that the mirror is operating with less than full capacity.

In addition, the “Status” line reports that one of the devices has been taken offline, and the “Action” line provides guidance about how to return the mirror to the normal online state.

zpool status tank

zpool offline tank c1t1d0

------------sample ---------------

root@opensolaris:~# zpool status tank
  pool: tank
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8d1    ONLINE       0     0     0
            c10d0   ONLINE       0     0     0
errors: No known data errors
root@opensolaris:~# zpool offline tank c10d0
root@opensolaris:~# zpool status tank
  pool: tank
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c8d1    ONLINE       0     0     0
            c10d0   OFFLINE      0     0     0
errors: No known data errors
root@opensolaris:~#

-----------------------------------

Bringing a Device Online

Once you have finished your maintenance tasks, you can easily bring the device back online. Simply use the “zpool online” command accompanied by the pool name and the device name.

Then we'll use the “zpool status tank” command to check the results:

As you can see, the device c1t1d0 is back online and part of the mirrored pool.

zpool online tank c1t1d0

---------------sample-----------

root@opensolaris:~#  zpool online tank c10d0
root@opensolaris:~# zpool status tank
  pool: tank
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Mon Dec  7 16:20:44 2009
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8d1    ONLINE       0     0     0
            c10d0   ONLINE       0     0     0  1.50K resilvered
errors: No known data errors
root@opensolaris:~#

---------------------------------

Replacing One Device With Another

Sometimes you might want to replace one device with another. ZFS allows you to do this with a single operation. Just use the “zpool replace” command.

For example, let's look at our mirror called “tank.” We type in the “zpool status tank” command and review the output.

You'll notice that there are two devices in the pool: c1t0d0 and c1t1d0. Now we'll use the “zpool replace” command to replace the device c1t1d0 with a new device named c1t2d0.

During this operation ZFS displays the following output showing that the device is being replaced.

Running the “zpool status tank” command again shows the new configuration.

Notice that the new device has replaced the old device in the mirror.

---
zpool status tank
zpool replace tank c1t1d0 c1t2d0
zpool status tank
Q: It takes two commands in ZFS to replace one drive with another drive.
A: False.

Summary

As you can see, managing ZFS pools is very simple. Just master a few simple commands and you can complete a variety of important tasks.

If you have any questions, please click the Back button and review. Otherwise, click the Exit button to return to the Main Menu and proceed to the next lesson.

Fed up, I changed OpenSolaris's sshd to allow root logins

I've been learning OpenSolaris recently and set up a server. The problem: for security reasons you have to su once before you can use root privileges. Annoying!

The fix: edit /etc/ssh/sshd_config, changing

#PermitRootLogin no
PermitRootLogin yes

svcadm restart ssh

Log in again: OK!

Noting it down. Use with caution in production!

Friday, December 4, 2009

OpenSolaris COMSTAR iSCSI experiment

Package check:

root@opensolaris:~# pkg info SUNWstmf
          Name: SUNWstmf
       Summary: Sun Common Multiprotocol SCSI Target
      Category: System/Hardware
         State: Installed
     Publisher: opensolaris.org
       Version: 0.5.11
Build Release: 5.11
        Branch: 0.111
Packaging Date: Fri May  8 16:37:12 2009
          Size: 2.28 MB
          FMRI: pkg:/SUNWstmf@0.5.11,5.11-0.111:20090508T163712Z
root@opensolaris:~# pkg info SUNWiscsidm
          Name: SUNWiscsidm
       Summary: Sun iSCSI Data Mover
      Category: System/Hardware
         State: Installed
     Publisher: opensolaris.org
       Version: 0.5.11
Build Release: 5.11
        Branch: 0.111
Packaging Date: Fri May  8 16:10:41 2009
          Size: 711.90 kB
          FMRI: pkg:/SUNWiscsidm@0.5.11,5.11-0.111:20090508T161041Z
root@opensolaris:~#
root@opensolaris:~# pkg info SUNWiscsit
          Name: SUNWiscsit
       Summary: Sun iSCSI COMSTAR Port Provider
      Category: System/Hardware
         State: Installed
     Publisher: opensolaris.org
       Version: 0.5.11
Build Release: 5.11
        Branch: 0.111
Packaging Date: Fri May  8 16:10:47 2009
          Size: 647.71 kB
          FMRI: pkg:/SUNWiscsit@0.5.11,5.11-0.111:20090508T161047Z
root@opensolaris:~#

Create the backing volume:

root@opensolaris:~# zfs create -V 1G mypool/vol
root@opensolaris:~# zpool list mypool
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
mypool    74G   475K  74.0G     0%  ONLINE  -
root@opensolaris:~# zfs list mypool/vol
NAME         USED  AVAIL  REFER  MOUNTPOINT
mypool/vol     1G  71.8G    16K  -
root@opensolaris:~#

Install storage-server and SUNWiscsit:

root@opensolaris:~# pkg info SUNWiscsit
          Name: SUNWiscsit
       Summary: Sun iSCSI COMSTAR Port Provider
      Category: System/Hardware
         State: Installed
     Publisher: opensolaris.org
       Version: 0.5.11
Build Release: 5.11
        Branch: 0.111
Packaging Date: Fri May  8 16:10:47 2009
          Size: 647.71 kB
          FMRI: pkg:/SUNWiscsit@0.5.11,5.11-0.111:20090508T161047Z
root@opensolaris:~#
root@opensolaris:~# pkg info storage-server
pkg: no packages matching the following patterns you specified are
installed on the system.  Try specifying -r to query remotely:
        storage-server
root@opensolaris:~#
root@opensolaris:~# pkg install storage-server
DOWNLOAD                                    PKGS       FILES     XFER (MB)
Completed                                  16/16     845/845   28.58/28.58
PHASE                                        ACTIONS
Install Phase                              1657/1657
root@opensolaris:~#

root@opensolaris:~# svcs -a |grep stmf
disabled       16:10:50 svc:/system/stmf:default
root@opensolaris:~# svcadm enable stmf
root@opensolaris:~# svcs -a |grep stmf
online         16:13:25 svc:/system/stmf:default
root@opensolaris:~#

root@opensolaris:~# stmfadm list-state
Operational Status: online
Config Status     : initialized
root@opensolaris:~#

root@opensolaris:/dev/zvol/rdsk/mypool# pwd
/dev/zvol/rdsk/mypool
root@opensolaris:/dev/zvol/rdsk/mypool# ls -al
total 5
drwxr-xr-x 4 root root  4 2009-12-04 16:10 .
drwxr-xr-x 4 root root  4 2009-12-04 11:41 ..
drwxr-xr-x 3 root root  3 2009-12-04 16:10 iscsi
lrwxrwxrwx 1 root root 39 2009-12-04 16:10 vol -> ../../../../devices/pseudo/zfs@0:1c,raw
root@opensolaris:/dev/zvol/rdsk/mypool# sbdadm create-lu /dev/zvol/rdsk/mypool/vol
Created the following LU:
              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f0d9cf880000004b18c5c30001      1073676288       /dev/zvol/rdsk/mypool/vol
root@opensolaris:/dev/zvol/rdsk/mypool#
root@opensolaris:/dev/zvol/rdsk/mypool# stmfadm add-view 600144f0d9cf880000004b18c5c30001
root@opensolaris:/dev/zvol/rdsk/mypool#
root@opensolaris:/dev/zvol/rdsk/mypool# stmfadm list-view -l 600144f0d9cf880000004b18c5c30001
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 0
root@opensolaris:/dev/zvol/rdsk/mypool#

root@opensolaris:/dev/zvol/rdsk/mypool# svcadm enable iscsi/target
root@opensolaris:/dev/zvol/rdsk/mypool# svcs iscsi/target
STATE          STIME    FMRI
online         16:21:11 svc:/network/iscsi/target:default
root@opensolaris:/dev/zvol/rdsk/mypool#
root@opensolaris:/dev/zvol/rdsk/mypool# itadm create-target
Target iqn.1986-03.com.sun:02:4a34aae0-f41e-6b82-9af0-eb7e8db6cec1 successfully created
root@opensolaris:/dev/zvol/rdsk/mypool#
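
Before moving to the client, the target side can be double-checked with the listing commands (a sketch; output omitted):

root@opensolaris:~# itadm list-target -v
root@opensolaris:~# stmfadm list-lu -v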

The test client is Windows XP.

The latest initiator at the moment is Initiator-2.08-build3825-x86fre.exe:

url:http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en

Install it.

Discovery -> Add -> the OpenSolaris IP, default port 3260.

Targets -> "Log On". If it shows "Connected", you're set.

Administrative Tools -> Storage -> Disk Management -> the newly added disk appears.

What remains is just initializing and formatting the disk; I won't go into that.

Noting it down!

Basically both the SMB approach and iSCSI have been tested now; what's left is the permissions problem. To be continued.

Thursday, December 3, 2009

Experiment: SMB sharing with OpenSolaris ZFS

As a beginner with OpenSolaris, I set out to get SMB sharing working; after several failures it finally works. Here are my notes:

root@opensolaris:~# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c7d0 <DEFAULT cyl 30390 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
       1. c9d0 <WDC WD80-  WD-WMAM9SH8120-0001-74.50GB>
          /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
Specify disk (enter its number): ^D
root@opensolaris:~#
root@opensolaris:~# zpool create -f mypool c9d0
root@opensolaris:~# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
mypool    74G  74.5K  74.0G     0%  ONLINE  -
rpool    232G  8.99G   223G     3%  ONLINE  -
root@opensolaris:~#

root@opensolaris:~# zfs create -o casesensitivity=mixed mypool/Demo
root@opensolaris:~# zfs set compression=on mypool/Demo
root@opensolaris:~# zfs set snapdir=visible mypool/Demo
root@opensolaris:~# zfs set sharesmb=name=Demo mypool/Demo
root@opensolaris:~# zfs set quota=1G mypool/Demo
root@opensolaris:~# zfs set reservation=1G mypool/Demo
root@opensolaris:~# zfs get all mypool/Demo |egrep '(NAME|smb|comp|case|quota|reservation)'
NAME         PROPERTY              VALUE                  SOURCE
mypool/Demo  compressratio         1.00x                  -
mypool/Demo  quota                 1G                     local
mypool/Demo  reservation           1G                     local
mypool/Demo  compression           on                     local
mypool/Demo  casesensitivity       mixed                  -
mypool/Demo  sharesmb              name=Demo              local
mypool/Demo  refquota              none                   default
mypool/Demo  refreservation        none                   default
mypool/Demo  usedbyrefreservation  0                      -
root@opensolaris:~#
root@opensolaris:~# useradd  -d /mypool/smbtest smbtest
root@opensolaris:~# passwd smbtest
New Password:
Re-enter new Password:
passwd: password successfully changed for smbtest
root@opensolaris:~#
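One pitfall worth noting here (my assumption about why earlier attempts failed, based on the OpenSolaris CIFS docs): the CIFS server keeps its own password hashes, which are generated only when passwd runs with the SMB PAM module enabled. The usual /etc/pam.conf line looks like this; if it is missing, add it and re-run passwd for the user:

# /etc/pam.conf
other   password required   pam_smb_passwd.so.1    nowarn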
root@opensolaris:~# smbadm join -w Galaxy
After joining Galaxy the smb service will be restarted automatically.
Would you like to continue? [no]: y
Successfully joined Galaxy
root@opensolaris:~#
root@opensolaris:~# sharemgr show -vp
default nfs=()
zfs
    zfs/mypool/myfs nfs=() nfs:sys=(rw="@172.16.X.X/24" root="@172.16.X.X/24")
          /export/zfs1
    zfs/mypool/Demo smb=()
          Demo=/mypool/Demo
    zfs/mypool/myfs2 smb=()
          mypool_myfs2=/mypool/myfs2
root@opensolaris:~#

At this point you'll see something like this:

root@opensolaris:/mypool/Demo# svcs -a |grep smb
online         12:48:40 svc:/network/smb/client:default
online*         12:48:41 svc:/network/smb/server:default
root@opensolaris:/mypool/Demo#

Access from a remote machine didn't work.

After one reboot, everything was OK and usable.

Dealing with the "smbd: kernel bind error: No such file or directory" problem

Setting up the SMB server on opensolaris11:

root@opensolaris:~# rem_drv smbsrv
root@opensolaris:~# pkg install SUNWsmbskr
DOWNLOAD                                    PKGS       FILES     XFER (MB)
Completed                                    1/1         6/6     0.43/0.43
PHASE                                        ACTIONS
Install Phase                                  20/20
root@opensolaris:~# pkg install SUNWsmbs
DOWNLOAD                                    PKGS       FILES     XFER (MB)
SUNWsmbs                                     0/1        2/29     0.00/1.49
Completed                                    1/1       29/29     1.49/1.49
PHASE                                        ACTIONS
Install Phase                                  70/70
root@opensolaris:~#
root@opensolaris:~# add_drv smbsrv
Driver (smbsrv) is already installed.

root@opensolaris:~# svccfg import /var/svc/manifest/network/smb/server.xml
root@opensolaris:~#

root@opensolaris:~# svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
root@opensolaris:~# svcs -a |grep smb
online         15:40:47 svc:/network/smb/client:default
maintenance    16:38:21 svc:/network/smb/server:default
root@opensolaris:~#

It failed at this point. I had never set up SMB on Solaris before and had no idea where the problem was. What to do? Google.

It turns out the OpenSolaris service log directory is

/var/svc/log

root@opensolaris:/var/svc/log# cat network-smb-server\:default.log
[ Dec  3 16:10:52 Disabled. ]
[ Dec  3 16:10:52 Rereading configuration. ]
[ Dec  3 16:36:29 Rereading configuration. ]
[ Dec  3 16:37:44 Rereading configuration. ]
[ Dec  3 16:38:19 Enabled. ]
[ Dec  3 16:38:20 Executing start method ("/usr/lib/smbsrv/smbd start"). ]
smbd: NetBIOS services started
smbd: kernel bind error: No such file or directory
smbd: daemon initialization failed
[ Dec  3 16:38:21 Method "start" exited with status 95. ]
[ Dec  3 16:40:00 Leaving maintenance because disable requested. ]
[ Dec  3 16:40:00 Disabled. ]
[ Dec  3 16:40:17 Enabled. ]
[ Dec  3 16:40:17 Executing start method ("/usr/lib/smbsrv/smbd start"). ]
smbd: NetBIOS services started
smbd: kernel bind error: No such file or directory
smbd: daemon initialization failed
[ Dec  3 16:40:18 Method "start" exited with status 95. ]
root@opensolaris:/var/svc/log#

More googling.

A thread at http://www.opensolaris.org/jive/thread.jspa?threadID=70302&tstart=0 says: "A reboot solved the problem."

After rebooting the machine, sure enough:

root@opensolaris:~# svcs -a |grep smb
online         17:04:07 svc:/network/smb/client:default
online         17:04:08 svc:/network/smb/server:default

All good. Moving on.

Wednesday, December 2, 2009

Mounting an ISO file and installing packages on OpenSolaris

How do you mount an ISO image file on OpenSolaris?

Step 1: attach the image file to a loopback device:

root@opensolaris:~/Downloads# lofiadm -a sol-nv-b125-x86-dvd.iso
/dev/lofi/1

root@opensolaris:~/Downloads#

Step 2: create a mount point and mount it:

root@opensolaris:~/Downloads# mkdir /mnt2
root@opensolaris:~/Downloads# mount -F hsfs /dev/lofi/1 /mnt2

root@opensolaris:~/Downloads# cd /mnt2
root@opensolaris:/mnt2# ls
autorun.inf  Copyright                    License     Solaris_11
autorun.sh   installer                    README.txt  Sun_HPC_ClusterTools
boot         JDS-THIRDPARTYLICENSEREADME  sddtool
root@opensolaris:/mnt2#

It worked. Moving on:

root@opensolaris:/mnt2# cd Solaris_11/
root@opensolaris:/mnt2/Solaris_11# ls
Docs  Misc  Patches  Product  Tools
root@opensolaris:/mnt2/Solaris_11# cd Product/
root@opensolaris:/mnt2/Solaris_11/Product# cp -rf SUNWjhrt SUNWjhdev SUNWj5dev SUNWj5rt SUNWj6rt SUNWjato  SUNWmconr SUNWmcon SUNWmcos SUNWmcosx SUNWmctag SUNWmfrun SUNWzfsgr SUNWzfsgu /var/spool/pkg

root@opensolaris:~# pkgadd

The following packages are available:
  1  SUNWj5dev     JDK 5.0 Dev. Tools (1.5.0_20)
                   (i386) 1.5.0,REV=2004.12.06.22.53
  2  SUNWj5rt      JDK 5.0 Runtime Env. (1.5.0_20)
                   (i386) 1.5.0,REV=2004.12.06.22.53
  3  SUNWj6rt      JDK 6.0 Runtime Env. (1.6.0_15)
                   (i386) 1.6.0,REV=2006.11.29.05.03
  4  SUNWjato      Java Studio Enterprise Web Application Framework
                   (all) 2.1.5,REV=2006.07.18.09.36
  5  SUNWjhdev     JavaHelp Development Utilities
                   (all) 2.0,REV=2008.10.08
  6  SUNWjhrt      JavaHelp Runtime
                   (all) 2.0,REV=2008.10.08
  7  SUNWmcon      Sun Java(TM) Web Console 3.1 (Core)
                   (i386) 3.1,REV=2008.08.25.16.44
  8  SUNWmconr     Sun Java(TM) Web Console 3.1 (Root)
                   (i386) 3.1,REV=2008.08.25.16.44
  9  SUNWmcos      Implementation of Sun Java(TM) Web Console (3.1) services
                   (i386) 3.1,REV=2008.08.25.16.44
10  SUNWmcosx     Implementation of Sun Java(TM) Web Console (3.1) services
                   (i386) 3.1,REV=2008.08.25.16.44

... 4 more menu choices to follow;
<RETURN> for more choices, <CTRL-D> to stop display:

11  SUNWmctag     Sun Java(TM) Web Console 3.1 (Tags & Components)
                   (i386) 3.1,REV=2008.08.25.16.44
12  SUNWmfrun     Motif RunTime Kit
                   (i386) 2.1.4,REV=10.2009.09.07
13  SUNWzfsgr     ZFS Administration for Sun Java(TM) Web Console (Root)
                   (i386) 1.0,REV=2009.09.16.20.27
14  SUNWzfsgu     ZFS Administration for Sun Java(TM) Web Console (Usr)
                   (i386) 1.0,REV=2009.09.16.20.27

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:

Processing package instance <SUNWj5dev> from </var/spool/pkg>

JDK 5.0 Dev. Tools (1.5.0_20)(i386) 1.5.0,REV=2004.12.06.22.53
Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Using </usr> as the package base directory.
## Processing package information.
## Processing system information.
   2 package pathnames are already properly installed.
## Verifying package dependencies.
WARNING:
    The <SUNWj5rt> package "JDK 5.0 Runtime Env.
    (1.5.0_01)" is a prerequisite package and should be
    installed.
WARNING:
    The <SUNWmfrun> package "Motif RunTime Kit" is a
    prerequisite package and should be installed.

Do you want to continue with the installation of <SUNWj5dev> [y,n,?]

I answered y all the way down.

Installation of <SUNWzfsgu> was successful.

root@opensolaris:~# svccfg
svc:> select system/webconsole
svc:/system/webconsole> setprop options/tcp_listen=true
svc:/system/webconsole> quit

root@opensolaris:~# /usr/sbin/smcwebserver restart
Restarting Sun Java(TM) Web Console Version 3.1 ...
The console is running

root@opensolaris:~# netstat -a|grep 6789
      *.6789               *.*                0      0 49152      0 LISTEN
      *.6789               *.*                0      0 49152      0 LISTEN
root@opensolaris:~#

Success!
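
When finished with the image, the mount point and the loopback device can be released again (a sketch):

root@opensolaris:~# umount /mnt2
root@opensolaris:~# lofiadm -d /dev/lofi/1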