:wq - blog » software
http://writequit.org/blog
Tu fui, ego eris

First extra package available for Hex 1.0.2! (honeysnap-1.0.6.11)
http://writequit.org/blog/2007/11/23/first-extra-package-available-for-hex-102-honeysnap-10611/
Fri, 23 Nov 2007

The first addon packages are now available for Hex (version 1.0.1 or 1.0.2)! I have successfully created a FreeBSD port and a Hex package for the honeysnap project. You can find the files here (navi.eight7.org) until they are put into an official Hex repository.

If you only want the port, download the honeysnap-1.0.6.11.tar.gz file (note that building from source requires a full /usr/ports tree for any dependencies, so make sure you fetch the ports tree on a Hex install before trying to build). Untar the file (I usually put it in /usr/ports/security/honeysnap), enter the directory and issue the following command:

sudo make install

It should automatically build all the dependencies and install honeysnap for you.
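For reference, the whole port route end to end looks roughly like this (the download location and the name of the unpacked directory are assumptions on my part):

cd /usr/ports/security
sudo tar xzf /tmp/honeysnap-1.0.6.11.tar.gz   # assuming the tarball was downloaded to /tmp
cd honeysnap                                  # assuming it unpacks into a honeysnap/ port directory
sudo make install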

If you want a faster way, download the honeysnap-1.0.6.11.tbz package and its dependency, the py25-setuptools-0.6c7_1.tbz package, into the same directory and issue the following:

sudo pkg_add -v ./honeysnap-1.0.6.11.tbz

The setuptools package will automatically be installed as a dependency.
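If you want to double-check that both packages registered (a quick sanity check, not part of the original instructions):

pkg_info | grep -i honeysnap     # should list honeysnap-1.0.6.11
pkg_info | grep -i setuptools    # should list py25-setuptools-0.6c7_1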

After installation, you should be able to type “honeysnap” and get all the command-line options, happy honeysnap-ing!

As always, if you have any questions or problems, feel free to email me or leave a comment!

P.S. Forgot to mention: the package above will only work for Hex 1.0.*; the port (the honeysnap-1.0.6.11.tar.gz file), however, will work on both Hex 1.0.* and FreeBSD 6.* without a problem. Hopefully I’ll be submitting it to the FreeBSD team for review soon to have it included in the standard ports :)

Frustrating: Kernel panics
http://writequit.org/blog/2007/06/08/frustrating-kernel-panics/
Fri, 08 Jun 2007

Alright, so for the last 3 days or so my main Solaris machine has been going crazy and kernel panicking about once a day, which is extremely annoying because every time it panics the machine reboots (and this machine has 3 zones that are in current use, so I get 3 calls about “why did my machine reboot”). Luckily, none of our servers here are production, so I get calls from development and not angry customers. So, I’m setting out to try and figure out why the machine is panicking. Here’s what I’m getting from the logs:

From the vmcore file:
ZFS: I/O failure (write on <unknown> off 0: zio 6000620cd40 [L0 ZIL intent log] 1000L/1000P DVA[0]=<0:1300cb9000:1000> zilog uncompressed BE contiguous birth=208621 fill=0 cksum=8eafa7df8b7cb3e:f2fd0

From the /var/adm/messages file:
Jun 5 12:01:11 lava2051 fctl: [ID 517869 kern.warning] WARNING: fp(0)::GPN_ID for D_ID=650700 failed
Jun 5 12:01:11 lava2051 fctl: [ID 517869 kern.warning] WARNING: fp(0)::N_x Port with D_ID=650700, PWWN=5006016841e019a7 disappeared from fabric
Jun 5 12:01:30 lava2051 scsi: [ID 243001 kern.info] /pci@1c,600000/fibre-channel@1/fp@0,0 (fcp0):
Jun 5 12:01:30 lava2051 offlining lun=0 (trace=0), target=650700 (trace=2800004)
Jun 5 12:06:28 lava2051 unix: [ID 836849 kern.notice]
Jun 5 12:06:28 lava2051 ^Mpanic[cpu2]/thread=2a101061cc0:
Jun 5 12:06:28 lava2051 unix: [ID 809409 kern.notice] ZFS: I/O failure (write on <unknown> off 0: zio 6000620cd40 [L0 ZIL intent log] 1000L/1000P DVA[0]=<0:1300cb9000:1000> zilog uncompressed BE contiguous birth=208621 fill=0 cksum=8eafa7df8b7cb3e:f2fd0a04af0e949e:1a:f3): error 5)
... some stuff ...
Jun 5 12:09:55 lava2051 savecore: [ID 570001 auth.error] reboot after panic: ZFS: I/O failure (write on <unknown> off 0: zio 6000620cd40 [L0 ZIL intent log] 1000L/1000P DVA[0]=<0:1300cb9000:1000> zilog uncompressed BE contiguous birth=208621 fill=0 cksum=8eafa7df8b7cb3e:f2fd0
Jun 5 12:09:55 lava2051 savecore: [ID 748169 auth.error] saving system crash dump in /var/crash/lava2051/*.1

Repeat x 3 so far. Like I said, extremely annoying.

Here’s what I think the problem is so far: I have a 500 GB ZFS pool built on a single Clariion LUN that is exported to this machine. From the looks of it, the machine is intermittently losing sight of the LUN, and when it disappears ZFS freaks out and panics because of an I/O failure. Now that I know what the problem is, I have no idea how to make the LUN stop disappearing. Guess I’m off to check some Clariion logs and see where that gets me. Anyone out there have any other suggestions on how I could go about fixing this? I have little experience working with core dumps, so I would be extremely grateful :)
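For anyone else staring at a dump like this, here is a rough sketch of poking at it with mdb; the crash directory and dump number come from the savecore lines above, and the dcmds are the standard Solaris ones, so adjust as needed:

cd /var/crash/lava2051
mdb unix.1 vmcore.1
> ::status        # panic string, dump time, OS release
> ::panicinfo     # CPU, thread and registers at the time of the panic
> ::stack         # stack trace of the panicking thread
> $q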

P.S. Yes, I know I should have mirrored the ZFS pool on 2 or more devices in case of a problem like this. This is more my “proof-of-concept” machine where I try out new things and see how developers/QA react to them.

UPDATE:
It looks like the problem was on the Clariion side. For the meantime, we exported a LUN from a different Clariion, did a zpool attach, waited for the data to be mirrored over, and then detached the old LUN. Fixed! <3 ZFS
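Roughly what that amounts to, with a made-up pool name and device names since the real ones aren’t in the post:

zpool attach datapool cXtOLDd0 cXtNEWd0   # mirror the old Clariion LUN onto the new one
zpool status datapool                     # wait here until the resilver completes
zpool detach datapool cXtOLDd0            # then drop the old LUN from the pool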

UPDATE 2:
Now the data is mirrored to a different Clariion. Fun fun. Interestingly enough, EMC doesn’t officially support ZFS on Clariion, only on Symmetrix.

Not-as-simple perl script for ZFS snapshot auditing
http://writequit.org/blog/2007/06/05/not-as-simple-perl-script-for-zfs-snapshot-auditing/
Tue, 05 Jun 2007

Hi everyone, I’m back again with another perl script that will hopefully be useful to a few of you.

Firstly, the script: http://lee.hinmanphoto.com/files/zdiff.txt (formatting long scripts in WordPress’ crazy editor is a long and arduous process, so I’m just linking to the script in this case; if anyone knows of a better place to put it, let me know). chmod +x it and away you go!

Edit: Sun was nice enough to host the file for me, here’s a link to their version in case the other one goes down: http://www.sun.com/bigadmin/scripts/submittedScripts/zdiff.txt

In a nutshell, here’s what it does:

  • Allows you to diff a file inside a ZFS snapshot against the current file in the filesystem and (optionally) print out the line differences
  • Recursively diff an entire snapshot using md5 sums and (optionally) print out the line differences
  • Display the md5 sums for each file in a ZFS snapshot and filesystem (this can get old to look at very quickly)

Basically, that doesn’t mean a whole lot on its own, so here’s the output from the -h option:

ZFS Snapshot diff
./zdiff.pl [-dhirv] <zfs snapshot name> [filename]

-d Display the lines that are different (diff output)
-h Display this usage
-i Ignore files that don't exist in the snapshot (only necessary for recursing)
-r Recursively diff every file in the snapshot (filename not required)
-v Verbose mode

[filename] is the filename RELATIVE to the ZFS snapshot root. For example, if
I had a filesystem snapshot called pool/data/zone@initial, the filename '/etc/passwd'
would refer to /pool/data/zone/etc/passwd in the filesystem and
/pool/data/zone/.zfs/snapshot/initial/etc/passwd in the snapshot.

A couple of examples:
./zdiff.pl -v -r -i pool/zones/lava2019@Fri
Checks the current pool/zones/lava2019 filesystem against the snapshot
returning the md5sum difference of any files (ignore files that don't
exist in the snapshot). With verbose mode

./zdiff.pl -d pool/zones/lava2019@Mon /root/etc/passwd
Check the md5sum for /pool/zones/lava2019/root/etc/passwd and compare
it to /pool/zones/lava2019/.zfs/snapshot/Mon/root/etc/passwd. Display
the lines that are different also.

Here’s what the output is going to look like:

-bash-3.00# ./zdiff.pl -d -v -r -i pool/zones/lava2019@Fri
Recursive diff on pool/zones/lava2019@Fri
Filesystem: /pool/zones/lava2019, Snapshot: Fri
Comparing: /pool/zones/lava2019/
to: /pool/zones/lava2019/.zfs/snapshot/Fri/
** /pool/zones/lava2019/root/etc/shadow is different
** MD5(/pool/zones/lava2019/root/etc/shadow)= 04fa68e7f9dbc0afbf8950bbb84650a6
** MD5(/pool/zones/lava2019/.zfs/snapshot/Fri/root/etc/shadow)= 4fc845ff7729e804806d8129852fa494
17d16
< tom:*LK*:::::::
** /pool/zones/lava2019/root/etc/dfs/dfstab is different
** MD5(/pool/zones/lava2019/root/etc/dfs/dfstab)= 8426d34aa7aae5a512a0c576ca2977b7
** MD5(/pool/zones/lava2019/.zfs/snapshot/Fri/root/etc/dfs/dfstab)= c3803f151cb3018f77f42226f699ee1b
13d12
< share -F nfs -o rw -d "Data" /data

etc, etc, etc.

I am planning on using it to audit certain files on different zones (like /etc/passwd) against an initial ZFS snapshot to see what’s changed. It’s a nice little way to keep track of things. Email me with any bugs: Matthew dot hinman at gmail dot com.
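For a single file, the manual equivalent of what the script automates looks something like this (using the example paths from the help text above):

# compare checksums of the live file and the snapshot copy
digest -v -a md5 /pool/zones/lava2019/root/etc/passwd \
    /pool/zones/lava2019/.zfs/snapshot/Mon/root/etc/passwd
# and show the actual line differences
diff /pool/zones/lava2019/.zfs/snapshot/Mon/root/etc/passwd \
    /pool/zones/lava2019/root/etc/passwd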

Solaris firewall configuration
http://writequit.org/blog/2007/05/17/solaris-firewall-configuration/
Thu, 17 May 2007

#
# IP Filter rules to be loaded during startup
#
# See ipf(4) manpage for more information on
# IP Filter rules syntax.
# Block evil packets
block in log quick all with short

# Allow everything from our DNS servers in
pass in quick from 128.222.228.235/32 to any keep state
pass in quick from 128.222.228.236/32 to any keep state
pass in quick from 128.222.12.10/32 to any keep state
pass in quick from 10.5.140.176/32 to any keep state

# Let our iscsi traffic in
pass in quick from any to any port = 3260 keep state
pass in quick from 10.5.140.151/32 to any keep state

# Allow SSH access in
pass in quick proto tcp from any to any port = 22 keep state

# Allow and log icmp packets
pass in log quick proto icmp all keep state

# Allow access to the rest of the world
pass out quick from any to any keep state

# Explicitly block telnet and everything else
block in quick proto tcp from any to any port = 23
block in quick from any to any

Yep, pretty basic. I have to say, I think I might actually like ipfilter better than iptables. Maybe that’s only because I’ve only done basic stuff with it so far.
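For reference, loading those rules on Solaris 10 goes something like this, assuming they live in the default /etc/ipf/ipf.conf:

svcadm enable network/ipfilter    # start IP Filter if it isn't running yet
ipf -Fa -f /etc/ipf/ipf.conf      # flush the active rule set and load this file
ipfstat -io                       # list the inbound/outbound rules actually in effect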

Linux firewall configuration
http://writequit.org/blog/2007/05/17/linux-firewall-configuration/
Thu, 17 May 2007

Basic iptables firewall config, only letting SSH and DNS through:

# Generated by iptables-save v1.2.11 on Thu May 17 14:52:04 2007
*filter
:INPUT DROP [13164:946396]
:FORWARD ACCEPT [0:0]
:OUTPUT DROP [0:0]
-A INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 128.222.228.235 -p tcp -j ACCEPT
-A INPUT -s 128.222.228.235 -p udp -j ACCEPT
-A INPUT -s 128.222.228.236 -p tcp -j ACCEPT
-A INPUT -s 128.222.228.236 -p udp -j ACCEPT
-A INPUT -s 128.222.12.10 -p tcp -j ACCEPT
-A INPUT -s 128.222.12.10 -p udp -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A OUTPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -j ACCEPT
-A OUTPUT -p udp -j ACCEPT
-A OUTPUT -p icmp -j ACCEPT
COMMIT
# Completed on Thu May 17 14:52:04 2007

(128.222.228.235/.236 and 128.222.12.10 are our DNS servers. I also accept pings because I’m nice like that and people around here tend to freak out if they can’t ping their machine. I let anything out as well; it’s easy to comment those rules out if you want to deny outbound traffic.)
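To load and persist that rule set on a Red Hat-style box (the paths and service name are the usual defaults there, so treat them as assumptions):

iptables-restore < /etc/sysconfig/iptables   # load the saved rules
service iptables save                        # or: iptables-save > /etc/sysconfig/iptables
iptables -L -n -v                            # verify what is actually active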

Use SVM to make RAID0 and RAID1 meta-partitions
http://writequit.org/blog/2007/05/17/use-svm-to-make-raid0-and-raid1-meta-partitions/
Thu, 17 May 2007

Firstly, the easy one:

RAID0:
Given 4 slices, each ~5 GB:

First, we need a metadb. I created a 100 MB slice on c1t1d0s0 (an entirely separate drive that I am NOT using for the RAID) and ran this command to initialize the database. It is a good idea to keep the database replicated in a minimum of 3 places, but that is beyond the scope of this tutorial:
metadb -a -f c1t1d0s0

Then it takes just one command to bring multiple drives together into a single slice/partition:
metainit d100 1 4 c2t2d0s0 c2t3d0s0 c2t4d0s0 c2t5d0s0
NOTE: I already created slice 0 on each of the drives.

To see the status of your meta-slice:
metastat d100
d100: Concat/Stripe
    Size: 40878080 blocks (19 GB)
    Stripe 0: (interlace: 32 blocks)
        Device      Start Block  Dbase  Reloc
        c2t2d0s0              0  No     Yes
        c2t3d0s0           4096  No     Yes
        c2t4d0s0           4096  No     Yes
        c2t5d0s0           4096  No     Yes

Device Relocation Information:
Device   Reloc  Device ID
c2t2d0   Yes    id1,sd@n6006048cb0ca0ceeef67fa7a33ce4c94
c2t3d0   Yes    id1,sd@n6006048cb275dda20f654d7248d17197
c2t4d0   Yes    id1,sd@n6006048c5aa658e3c69370f2bad75bc0
c2t5d0   Yes    id1,sd@n6006048cc092136a695a21eeaa948f88

See? Now we’ve got a 19 GB slice. Feel free to newfs /dev/md/dsk/d100 and mount it somewhere fun.
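For completeness, putting a UFS filesystem on it looks something like this; newfs wants the raw device, mount takes the block device, and /data is just an example mount point:

newfs /dev/md/rdsk/d100
mkdir -p /data
mount /dev/md/dsk/d100 /data
# optional /etc/vfstab line so it comes back after a reboot:
# /dev/md/dsk/d100  /dev/md/rdsk/d100  /data  ufs  2  yes  -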

Next up: RAID1
This is actually not as hard as it looks. First, make sure you init your database like the first step from above. Then initialize your first meta slice:
metainit d101 1 1 c2t2d0s0

Then, create the mirror on top of that slice (this will become your final RAID1 device) by issuing the following command:
metainit d100 -m d101

Then initialize the other slices in your mirror; in this case there are 3 additional slices:
metainit d102 1 1 c2t3d0s0
metainit d103 1 1 c2t4d0s0
metainit d104 1 1 c2t5d0s0

From there, it’s quite easy to finish up by attaching the other submirrors:
metattach d100 d102
metattach d100 d103
metattach d100 d104

Then monitor metastat for the resync progress percentage until all the mirrors are in sync. Finished!
metastat d100
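If you don’t feel like re-running that by hand, a crude shell loop like this (my own habit, not from the post) will poll until the resync lines disappear:

while metastat d100 | grep -i resync > /dev/null; do
    metastat d100 | grep -i "in progress"   # shows "Resync in progress: N % done"
    sleep 60
done
echo "all submirrors synced"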

Getting EMC Celerras to work for iscsi on Solaris 10
http://writequit.org/blog/2007/05/17/getting-emc-celerras-to-work-for-iscsi-on-solaris-10/
Thu, 17 May 2007

For fun and profit!

Basically, for my own categorization:

1. Celerra-side:
Create filesystems (I am using 4 because I want to stripe across all 4):
nas_fs -n iscsiRAID1_5g -c size=5G pool=clar_r5_performance
nas_fs -n iscsiRAID2_5g -c size=5G pool=clar_r5_performance
nas_fs -n iscsiRAID3_5g -c size=5G pool=clar_r5_performance
nas_fs -n iscsiRAID4_5g -c size=5G pool=clar_r5_performance

Mount filesystems:
server_mount server_2 iscsiRAID1_5g /iscsiRAID1_5g
(repeat for all 4 filesystems)

Create iscsi target:
server_iscsi server_2 -target -alias target_3 -create 1000:np=10.5.140.151
(10.5.140.151 is the datamover IP for this Celerra; “target_3” is the target name)

Create iscsi LUNs:
server_iscsi server_2 -lun -number 1 -create target_3 -size 5000 -fs iscsiRAID1_5g
server_iscsi server_2 -lun -number 2 -create target_3 -size 5000 -fs iscsiRAID2_5g
server_iscsi server_2 -lun -number 3 -create target_3 -size 5000 -fs iscsiRAID3_5g
server_iscsi server_2 -lun -number 4 -create target_3 -size 5000 -fs iscsiRAID4_5g

I am creating 4 LUNs, 1 for each of the 4 filesystems.

2. On the Sun side:
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 10.5.140.151:3260

(10.5.140.151 is the datamover for our Celerra; it will be our iscsi target)
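To double-check that sendtargets discovery is enabled and the address was registered (a quick sanity check, not in the original post):

iscsiadm list discovery            # Send Targets should show as enabled
iscsiadm list discovery-address    # 10.5.140.151:3260 should be listed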

Run this command so you can get the initiator node name:
iscsiadm list initiator-node
It’ll spit out something that looks like this:
Initiator node name: iqn.1986-03.com.sun:01:ba88a3f5ffff.4648d8d8
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Configured Sessions: 1

We’re interested in the initiator node name up there, the part that starts with iqn.

Back on the Celerra:
server_iscsi server_2 -mask -set target_3 -initiator iqn.1986-03.com.sun:01:ba88a3f5ffff.4648d8d8 -grant 1-4
(use the initiator name you got from the previous command; we are granting access to LUNs 1 through 4, our RAID LUNs)
And start the iscsi service if it hasn’t been started already:
server_iscsi server_2 -service -start
You are now completely done on the Celerra side, you can log off.

Back on the Sun:
Run this command to make sure you can see your targets alright
iscsiadm list target
Target: iqn.1992-05.com.emc:apm000650039080000-3
Alias: target_3
TPGT: 1000
ISID: 4000002a0000
Connections: 1

You should see something similar to the above. If you do, you now have a successful connection to the Celerra for iscsi. Don’t forget to create device nodes for your drives by running this:
devfsadm -i iscsi
Now run “format” and you should be able to see your drives show up. Don’t forget to open port 3260 in your firewall so that iscsi traffic can get through.

You should now be in business with your 4 drives. I’m still working on the RAID/mirror/striping part. I will add another post once I figure this out.

If you run into an error where the iscsi driver will not come online, take a look at this link.
