:wq - blog » hardware
http://writequit.org/blog

Recent home project: ZFS NAS server
http://writequit.org/blog/2007/07/02/recent-home-project-zfs-nas-server/
Mon, 02 Jul 2007 22:01:47 +0000

I apologize for not posting for the last week; it was a very hectic week for me because of a request for a Solaris 9 machine with a tape drive that took the greater part of a week to get working properly. All I have to say about that is that I much prefer Solaris 10 over Solaris 9.

Anyhow, on to the project. Lately I've been working on an old Blade 150 that I have at home, trying to get it to recognize the IDE controller card and the old hard drives attached to it. Below you can see a picture of what I'm working with:

Blade 150

I have an UltraSPARC II processor in there running at 650MHz, as well as a gig of RAM (hopefully enough for my purposes). I found an extremely old IDE RAID controller card, switched it to JBOD mode, and connected it to 2 spare hard drives. At this time the hard drives are each only 40GB, and I haven't figured out a way for them to stay in the case (there isn't enough space in there), so I ran the IDE cables through the PCI slot opening and set the drives on top.

One of the problems I ran into was powering the hard drives: the 150 didn't have enough spare power connectors for 2 additional drives (in addition to the one inside the machine for the OS), so I ended up gutting another machine of mine for a power supply to run just the hard drives. Slightly out of the picture on the left, that power supply is sitting with a paper clip jammed into the motherboard connector to manually switch it to "always on". Not a very elegant solution, but for the time being it works. Hopefully I'll get a case for the hard drives and the additional power supply so it doesn't look nearly as ugly.

Anyhow, after installing OpenSolaris build 65, the machine booted up and was able to see the 2 additional hard drives, though it panicked and rebooted when I actually selected them; after the reboot they behaved fine. I proceeded to create a mirrored zpool in case of drive failure. At this point it's only 40GB, but I plan on getting some 300-500GB drives for the data. Eventually I want this shared across the network for Delilah and me to store our important documents on (and it will be backed up as well). Definitely a very cheap solution for our simple home.
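For the curious, creating the mirrored pool and sharing it out only takes a couple of commands. This is a minimal sketch; the pool name, filesystem name, and device names here are hypothetical, so substitute whatever format reports for your drives:

# device, pool, and filesystem names below are examples only
zpool create tank mirror c2d0 c2d1
# a filesystem for the shared documents, exported over NFS
zfs create tank/documents
zfs set sharenfs=on tank/documents
# verify both halves of the mirror show up as ONLINE
zpool status tank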

Does anyone out there have a home server running Solaris? What do you use it for? How does it work out?

Thanks to my beautiful wife Delilah for taking the picture while I was at work!

Use SVM to make RAID0 and RAID1 meta-partitions
http://writequit.org/blog/2007/05/17/use-svm-to-make-raid0-and-raid1-meta-partitions/
Thu, 17 May 2007 18:55:13 +0000

Firstly, the easy one:

RAID0:
Given 4 slices, each ~5GB:

First, you need a metadb. I created a 100MB slice on c1t1d0s0 (which I am NOT using for the RAID; it's an entirely separate drive) and ran this command to initialize the database. It is a good idea to replicate the database in a minimum of 3 locations, but that is beyond the scope of this tutorial:
metadb -a -f c1t1d0s0
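For what it's worth, if you do want redundant state database replicas, metadb accepts several slices at once; the extra slice names in this sketch are hypothetical and should live on separate disks:

# slice names are examples only; spread replicas across different disks
metadb -a -f c1t1d0s0 c1t2d0s0 c1t3d0s0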

Then it only takes one command to bring multiple drives together into a single slice/partition:
metainit d100 1 4 c2t2d0s0 c2t3d0s0 c2t4d0s0 c2t5d0s0
NOTE: I had already created slice 0 on each of the drives.

To see the status of your meta-slice:
metastat d100
d100: Concat/Stripe
Size: 40878080 blocks (19 GB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase Reloc
c2t2d0s0 0 No Yes
c2t3d0s0 4096 No Yes
c2t4d0s0 4096 No Yes
c2t5d0s0 4096 No Yes

Device Relocation Information:
Device Reloc Device ID
c2t2d0 Yes id1,sd@n6006048cb0ca0ceeef67fa7a33ce4c94
c2t3d0 Yes id1,sd@n6006048cb275dda20f654d7248d17197
c2t4d0 Yes id1,sd@n6006048c5aa658e3c69370f2bad75bc0
c2t5d0 Yes id1,sd@n6006048cc092136a695a21eeaa948f88

See? Now we’ve got a 19GB slice. Feel free to newfs /dev/md/dsk/d100 and mount it somewhere fun.
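For example, putting a UFS filesystem on the new metadevice and mounting it might look like this (the /data mount point is just a placeholder):

# newfs works on the raw device; mount uses the block device
newfs /dev/md/rdsk/d100
mkdir /data
mount /dev/md/dsk/d100 /data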

Next up: RAID1
This is actually not as hard as it looks. First, make sure you have initialized your metadb as in the first step above. Then initialize your first meta-slice:
metainit d101 1 1 c2t2d0s0

Then create the mirror for that slice, which will become your final RAID1 slice, with the following command:
metainit d100 -m d101

Then initialize the other slices in your mirror; in this case there are 3 additional slices:
metainit d102 1 1 c2t3d0s0
metainit d103 1 1 c2t4d0s0
metainit d104 1 1 c2t5d0s0

From there, it's quite easy to finish up by attaching the remaining submirrors:
metattach d100 d102
metattach d100 d103
metattach d100 d104

Then monitor metastat for the resync progress percentage until all of the submirrors are in sync. Finished!
metastat d100

Getting EMC Celerras to work for iscsi on Solaris 10
http://writequit.org/blog/2007/05/17/getting-emc-celerras-to-work-for-iscsi-on-solaris-10/
Thu, 17 May 2007 18:31:12 +0000

For fun and profit!

Basically, for my own categorization:

1. Celerra-side:
Create the filesystems (I am using 4 because I want to stripe across all 4):
nas_fs -n iscsiRAID1_5g -c size=5G pool=clar_r5_performance
nas_fs -n iscsiRAID2_5g -c size=5G pool=clar_r5_performance
nas_fs -n iscsiRAID3_5g -c size=5G pool=clar_r5_performance
nas_fs -n iscsiRAID4_5g -c size=5G pool=clar_r5_performance

Mount filesystems:
server_mount server_2 iscsiRAID1_5g /iscsiRAID1_5g
(repeat for all 4 filesystems)
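Spelled out, assuming the other mount points follow the same naming pattern as the first one:

# mount points are assumed to match the filesystem names
server_mount server_2 iscsiRAID2_5g /iscsiRAID2_5g
server_mount server_2 iscsiRAID3_5g /iscsiRAID3_5g
server_mount server_2 iscsiRAID4_5g /iscsiRAID4_5g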

Create iscsi target:
server_iscsi server_2 -target -alias target_3 -create 1000:np=10.5.140.151
(10.5.140.151 is the datamover IP for this Celerra; "target_3" is the target name)

Create iscsi LUNs:
server_iscsi server_2 -lun -number 1 -create target_3 -size 5000 -fs iscsiRAID1_5g
server_iscsi server_2 -lun -number 2 -create target_3 -size 5000 -fs iscsiRAID2_5g
server_iscsi server_2 -lun -number 3 -create target_3 -size 5000 -fs iscsiRAID3_5g
server_iscsi server_2 -lun -number 4 -create target_3 -size 5000 -fs iscsiRAID4_5g

I am creating 4 LUNs, one for each of the 4 filesystems.

2. On the Sun side:
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 10.5.140.151:3260

(10.5.140.151 is the datamover for our Celerra; it will be our iscsi target)

Run this command so you can get the initiator node name:
iscsiadm list initiator-node
It’ll spit out something that looks like this:
Initiator node name: iqn.1986-03.com.sun:01:ba88a3f5ffff.4648d8d8
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Configured Sessions: 1

We're interested in the "Initiator node name" line up there, the part that starts with iqn.blahblahblah.

Back on the Celerra:
server_iscsi server_2 -mask -set target_3 -initiator iqn.1986-03.com.sun:01:ba88a3f5ffff.4648d8d8 -grant 1-4
(use the initiator name you got from the previous command; we are granting access to LUNs 1 through 4, our RAID LUNs)
And start the iscsi service if it hasn’t been started already:
server_iscsi server_2 -service -start
You are now completely done on the Celerra side, you can log off.

Back on the Sun:
Run this command to make sure you can see your targets:
iscsiadm list target
Target: iqn.1992-05.com.emc:apm000650039080000-3
Alias: target_3
TPGT: 1000
ISID: 4000002a0000
Connections: 1

You should see something similar to the above. If you do, you now have a successful connection to the Celerra for iscsi. Don’t forget to create device nodes for your drives by running this:
devfsadm -i iscsi
Now run “format” and you should be able to see your drives show up. Don’t forget to open port 3260 in your firewall so that iscsi traffic can get through.
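If the Sun box happens to be running the bundled Solaris IP Filter, a rule along these lines in /etc/ipf/ipf.conf would let the initiator reach the target; this is just a sketch, and the address is the datamover IP from above, so adjust for whatever firewall you actually run:

# allow outbound iscsi (port 3260) to the Celerra datamover, with state kept for replies
pass out quick proto tcp from any to 10.5.140.151 port = 3260 keep state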

You should now be in business with your 4 drives. I’m still working on the RAID/mirror/striping part. I will add another post once I figure this out.

If you run into an error where the iscsi driver will not come online, take a look at this link.
