FlakRat's Rat House: Blogging about Linux, VMware, KVM, Rocks Clusters, HPC, Red Hat, Fedora, Ubuntu and other tech stuff that interests me

<h2>HowTo: Restore a GridScaler GPFS Client Node after Reinstalling the Node</h2>
(2016-12-01)
I ran into this issue after reinstalling several compute nodes on our cluster shortly after bringing our new DDN GridScaler GPFS storage cluster online.
<br />
<pre class="source-code"><code>$ sudo mmstartup -N c0040
Fri Dec 2 03:36:03 UTC 2016: mmstartup: Starting GPFS ...
c0040: mmremote: determineMode: Missing file /var/mmfs/gen/mmsdrfs.
c0040: mmremote: This node does not belong to a GPFS cluster.
mmstartup: Command failed. Examine previous error messages to determine cause.</code></pre>
<br />
One method I found online: take the affected node off the network (or reboot it) and remove it from the GPFS cluster; once it's back on the network (or fully rebooted), add it back, license it, and start it.
<br /><br />
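Sketched as commands, that method looks roughly like the following. Treat it as a hedged outline rather than a tested procedure: the node name is an example, the remove/add commands have to be run from a node that is still in the cluster, and the function only prints the commands unless you pass a runner other than echo.

```shell
# Dry-run sketch of the remove/re-add recovery method (prints commands by default).
# Usage: readd_node <node> [runner]   e.g. readd_node c0040 sudo
readd_node() {
  node=$1
  run=${2:-echo}                                 # echo = dry run
  $run mmdelnode -N "$node"                      # drop the stale cluster member
  # ...take the node off the network / reboot it here, then once it is back:
  $run mmaddnode -N "$node"                      # add it back to the cluster
  $run mmchlicense client --accept -N "$node"    # assign a client license
  $run mmstartup -N "$node"                      # start GPFS on it
}

readd_node c0040
```

The dry run lets you eyeball the sequence before pasting the commands into a session on one of the NSD servers.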
Later I was introduced to the <b>mmsdrrestore</b> command (a portion of the man page is shown below):
<pre class="source-code"><code>mmsdrrestore command
Restores the latest GPFS system files on the specified nodes.
Synopsis
mmsdrrestore [-p NodeName] [-F mmsdrfsFile] [-R remoteFileCopyCommand]
[-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmsdrrestore command is intended for use by experienced
system administrators.
Use the mmsdrrestore command to restore the latest GPFS
system files on the specified nodes. If no nodes are specified,
the command restores the configuration information only on the
node on which it is run. If the local GPFS configuration file is
missing, the file that is specified with the -F option from
the node that is specified with the -p option is used
instead. This command works best when used with the
mmsdrbackup user exit. See the following IBM Spectrum
Scale: Administration and Programming Reference topic:
mmsdrbackup user exit.
...
</code></pre>
<br />
Here's an example of using the command to restore the configuration to node c0040 using primary server gs0 (i.e. one of the NSD servers):
<pre class="source-code"><code>$ sudo mmsdrrestore -p gs0 -N c0040
Fri Dec 2 03:47:06 UTC 2016: mmsdrrestore: Processing node gs0
Fri Dec 2 03:47:08 UTC 2016: mmsdrrestore: Processing node c0040
mmsdrrestore: Command successfully completed
</code></pre>
<br />
Finally, start GPFS on the client (which also mounts the file system(s), if configured to do so):
<pre class="source-code"><code>$ sudo mmstartup -N c0040
</code></pre>

<h2>Dell OMSA 8.3 on CentOS 7.2: "Error! Chassis info setting unavailable on this system."</h2>
(2016-08-22)
After installing Dell OMSA 8.3 on a new PowerEdge R730xd running CentOS 7.2 x86_64, the omreport chassis info command reports the following (after starting the services):
<br />
<pre class="source-code"><code>
# omreport chassis info
Error! Chassis info setting unavailable on this system.
</code></pre>
First, the solution (Zurd on the mailing list pointed me here: <a href="http://lists.us.dell.com/pipermail/linux-poweredge/2016-August/050692.html">http://lists.us.dell.com/pipermail/linux-poweredge/2016-August/050692.html</a>), followed by the full ticket I sent to the Dell linux-poweredge mailing list.
<br />
<br />
The solution for CentOS users (and possibly other non-supported distros) is to stop the services, make the following change, then restart the services:<br />
<pre class="source-code"><code>--- /opt/dell/srvadmin/etc/srvadmin-storage/stsvc.ini.orig 2016-08-22 21:28:32.079580254 -0500
+++ /opt/dell/srvadmin/etc/srvadmin-storage/stsvc.ini 2016-08-22 21:20:32.374317823 -0500
@@ -116,7 +116,7 @@
vil4=dsm_sm_sasvil
vil5=dsm_sm_sasenclvil
vil6=dsm_sm_swrvil
-vil7=dsm_sm_psrvil
+; vil7=dsm_sm_psrvil
vil8=dsm_sm_rnavil
[SSDSmartInterval]</code></pre>
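If you're rolling this fix out to more than one box, the edit scripts easily. A small sketch: the stsvc.ini path comes from the diff above, and the commented-out stop/start lines are the service bounce described in the text.

```shell
#!/bin/sh
# Comment out the psrvil VIL plugin line in stsvc.ini, keeping a .orig backup.
# Point INI at a scratch copy first if you want to try it safely.
INI=${INI:-/opt/dell/srvadmin/etc/srvadmin-storage/stsvc.ini}

# srvadmin-services.sh stop     # stop the OMSA services first
if [ -f "$INI" ]; then
  sed -i.orig 's/^vil7=dsm_sm_psrvil/; vil7=dsm_sm_psrvil/' "$INI"
fi
# srvadmin-services.sh start    # then restart them
```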
Now on to the full details of the issue:
<br />
<ol>
<li>Install Dell OMSA 8.3</li>
<pre class="source-code"><code># wget -q -O - http://linux.dell.com/repo/hardware/dsu/bootstrap.cgi | bash
# yum clean all
# yum -y install kernel-devel kernel-headers gcc dell-system-update
# yum -y install srvadmin-all</code></pre>
<li>Next check the status of the services (not started)
<pre class="source-code"><code># srvadmin-services.sh status
dell_rbu (module) is stopped
ipmi driver is running
dsm_sa_datamgrd is stopped
dsm_sa_eventmgrd is stopped
dsm_sa_snmpd is stopped
● dsm_om_shrsvc.service - LSB: DSM OM Shared Services
Loaded: loaded (/etc/rc.d/init.d/dsm_om_shrsvc)
Active: inactive (dead)
Docs: man:systemd-sysv-generator(8)
Aug 22 20:43:08 r730xd-srv01.local systemd[1]: Starting LSB: DSM OM Shared Services...
Aug 22 20:43:08 r730xd-srv01.local dsm_om_shrsvc[5144]: [47B blob data]
Aug 22 20:43:08 r730xd-srv01.local systemd[1]: Started LSB: DSM OM Shared Services.
Aug 22 20:43:08 r730xd-srv01.local dsm_om_shrsvc[5144]: tput: No value for $TERM and no -T specified
Aug 22 20:45:55 r730xd-srv01.local systemd[1]: Stopping LSB: DSM OM Shared Services...
Aug 22 20:45:55 r730xd-srv01.local dsm_om_shrsvc[8804]: [52B blob data]
Aug 22 20:45:55 r730xd-srv01.local systemd[1]: Stopped LSB: DSM OM Shared Services.
Aug 22 20:46:28 r730xd-srv01.local systemd[1]: Stopped LSB: DSM OM Shared Services.
● dsm_om_connsvc.service - LSB: DSM OM Connection Service
Loaded: loaded (/etc/rc.d/init.d/dsm_om_connsvc)
Active: inactive (dead)
Docs: man:systemd-sysv-generator(8)
Aug 22 20:43:08 r730xd-srv01.local systemd[1]: Starting LSB: DSM OM Connection Service...
Aug 22 20:43:08 r730xd-srv01.local dsm_om_connsvc[5145]: [50B blob data]
Aug 22 20:43:08 r730xd-srv01.local systemd[1]: Started LSB: DSM OM Connection Service.
Aug 22 20:45:55 r730xd-srv01.local systemd[1]: Stopping LSB: DSM OM Connection Service...
Aug 22 20:46:02 r730xd-srv01.local dsm_om_connsvc[8844]: [55B blob data]
Aug 22 20:46:02 r730xd-srv01.local systemd[1]: Stopped LSB: DSM OM Connection Service.
Aug 22 20:46:29 r730xd-srv01.local systemd[1]: Stopped LSB: DSM OM Connection Service.</code></pre>
</li>
<li>Start the services
<pre class="source-code"><code># srvadmin-services.sh start
Starting instsvcdrv (via systemctl): [ OK ]
Starting dataeng (via systemctl): [ OK ]
Starting dsm_om_shrsvc (via systemctl): [ OK ]
Starting dsm_om_connsvc (via systemctl): [ OK ]</code></pre>
</li>
<li>Try running the chassis info command
<pre class="source-code"><code># omreport chassis info
Error! Chassis info setting unavailable on this system.
# omreport about
Product name : Dell OpenManage Server Administrator
Version : 8.3.0
Copyright : Copyright (C) Dell Inc. 1995-2015 All rights reserved.
Company : Dell Inc.</code></pre>
</li>
<li>The following are the RPMs installed via yum:
<pre class="source-code"><code># rpm -qa | grep srvadmin
srvadmin-xmlsup-8.3.0-1908.9058.el7.x86_64
srvadmin-omacore-8.3.0-1908.9058.el7.x86_64
srvadmin-server-snmp-8.3.0-1908.9058.el7.x86_64
srvadmin-oslog-8.3.0-1908.9058.el7.x86_64
srvadmin-idrac-vmcli-8.3.0-1908.9058.el7.x86_64
srvadmin-storageservices-snmp-8.3.0-1908.9058.el7.x86_64
srvadmin-smcommon-8.3.0-1908.9058.el7.x86_64
srvadmin-omcommon-8.3.0-1908.9058.el7.x86_64
srvadmin-smweb-8.3.0-1908.9058.el7.x86_64
srvadmin-racsvc-8.3.0-1908.9058.el7.x86_64
srvadmin-nvme-8.3.0-1908.9058.el7.x86_64
srvadmin-storage-cli-8.3.0-1908.9058.el7.x86_64
srvadmin-storageservices-8.3.0-1908.9058.el7.x86_64
srvadmin-omilcore-8.3.0-1908.9058.el7.x86_64
srvadmin-racadm4-8.3.0-1908.9058.el7.x86_64
srvadmin-isvc-8.3.0-1908.9058.el7.x86_64
srvadmin-argtable2-8.3.0-1908.9058.el7.x86_64
srvadmin-racadm5-8.3.0-1908.9058.el7.x86_64
srvadmin-cm-8.3.0-1908.9058.el7.x86_64
srvadmin-isvc-snmp-8.3.0-1908.9058.el7.x86_64
srvadmin-rac4-populator-8.3.0-1908.9058.el7.x86_64
srvadmin-tomcat-8.3.0-1908.9058.el7.x86_64
srvadmin-itunnelprovider-8.3.0-1908.9058.el7.x86_64
srvadmin-storelib-sysfs-8.3.0-1908.9058.el7.x86_64
srvadmin-storageservices-cli-8.3.0-1908.9058.el7.x86_64
srvadmin-deng-8.3.0-1908.9058.el7.x86_64
srvadmin-rac-components-8.3.0-1908.9058.el7.x86_64
srvadmin-ominst-8.3.0-1908.9058.el7.x86_64
srvadmin-sysfsutils-8.3.0-1908.9058.el7.x86_64
srvadmin-rac5-8.3.0-1908.9058.el7.x86_64
srvadmin-base-8.3.0-1908.9058.el7.x86_64
srvadmin-idrac-ivmcli-8.3.0-1908.9058.el7.x86_64
srvadmin-rac4-8.3.0-1908.9058.el7.x86_64
srvadmin-webserver-8.3.0-1908.9058.el7.x86_64
srvadmin-standardAgent-8.3.0-1908.9058.el7.x86_64
srvadmin-storelib-8.3.0-1908.9058.el7.x86_64
srvadmin-storage-snmp-8.3.0-1908.9058.el7.x86_64
srvadmin-omacs-8.3.0-1908.9058.el7.x86_64
srvadmin-racdrsc-8.3.0-1908.9058.el7.x86_64
srvadmin-idracadm-8.3.0-1908.9058.el7.x86_64
srvadmin-idrac-snmp-8.3.0-1908.9058.el7.x86_64
srvadmin-realssd-8.3.0-1908.9058.el7.x86_64
srvadmin-storage-8.3.0-1908.9058.el7.x86_64
srvadmin-all-8.3.0-1908.9058.el7.x86_64
srvadmin-hapi-8.3.0-1908.9058.el7.x86_64
srvadmin-deng-snmp-8.3.0-1908.9058.el7.x86_64
srvadmin-server-cli-8.3.0-1908.9058.el7.x86_64
srvadmin-jre-8.3.0-1908.9058.el7.x86_64
srvadmin-idrac-8.3.0-1908.9058.el7.x86_64</code></pre>
</li>
</ol>
<h2>How To: Enable PXE and Configure Boot Order via the Dell RACADM Command</h2>
(2016-06-22)
Our HPC cluster was lucky enough to double in compute capacity recently. Whoop! The new hardware brought with it some significant changes in rack layout and networking fabric. The compute nodes are a combination of Dell R630, DSS1500 and R730 (for GPU K80 and Intel Phi nodes).<br />
<br />
The existing core 10GbE CAT6 based fabric (made up of <a href="http://www.dell.com/us/business/p/force10-s4820/pd" target="_blank">Dell Force10 S4820T</a>) was replaced by a <a href="http://en.community.dell.com/techcenter/networking/w/wiki/6449.dell-networking-z9500" target="_blank">Dell Force10 Z9500</a> and fiber (Z9500 has 132 QSFP+ 40GbE ports that can in turn be broken out into 528 10GbE SFP+ ports).<br />
<br />
Physical changes aside (like wiring, top of rack 40GbE to 10GbE breakout panels, etc...), the above meant we had to change the primary boot device from the add-on 10GbE CAT6-based NIC to the onboard fiber 10GbE NIC (CentOS 7 sees this interface as <b>eno1</b>).<br />
<br />
This required two changes at the system BIOS / NIC hardware config level:<br />
<ul>
<li>Enable PXE boot on the NIC</li>
<li>Modify the BIOS boot order</li>
</ul>
One method to make these two changes in bulk is to use the Dell OpenManage command line tool <b>racadm</b>, which was what we decided to use.<br />
<br />
The following are notes I took while working on a subset of the compute nodes.<br />
<h4>
Enable PXE on the fiber interface
</h4>
<div>
The first step is to identify the names of the network interfaces. I queried a single node to get the full list of interfaces, then queried the specific interface (.1) just for grins to see what settings were available. In this case the first integrated port is referenced as <b>NIC.nicconfig.1</b> and <b>NIC.Integrated.1-1-1</b>.</div>
<pre class="source-code"><code># Get list of Nics
racadm -r 172.16.3.48 -u root -p xxxxxxx get nic.nicconfig
NIC.nicconfig.1 [Key=NIC.Integrated.1-1-1#nicconfig]
NIC.nicconfig.2 [Key=NIC.Integrated.1-2-1#nicconfig]
NIC.nicconfig.3 [Key=NIC.Integrated.1-3-1#nicconfig]
NIC.nicconfig.4 [Key=NIC.Integrated.1-4-1#nicconfig]
NIC.nicconfig.5 [Key=NIC.Slot.3-1-1#nicconfig]
NIC.nicconfig.6 [Key=NIC.Slot.3-2-1#nicconfig]
racadm -r 172.16.3.48 -u root -p xxxxxxx get nic.nicconfig.1
[Key=NIC.Integrated.1-1-1#nicconfig]
LegacyBootProto=NONE
#LnkSpeed=AutoNeg
NumberVFAdvertised=64
VLanId=0
WakeOnLan=Disabled
</code></pre>
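If you need just the FQDD names (e.g. to feed into later commands), they can be scraped out of that listing. A small sketch; the helper name is mine, and the sample lines are taken from the output above.

```shell
# nic_fqdds: print the FQDD (NIC.Integrated.1-1-1, ...) from each line of
# 'racadm ... get nic.nicconfig' output read on stdin.
# Live usage:  racadm -r 172.16.3.48 -u root -p xxxxxxx get nic.nicconfig | nic_fqdds
nic_fqdds() { sed -n 's/.*\[Key=\(.*\)#nicconfig\]/\1/p'; }

# Fed with two of the sample lines from above:
nic_fqdds <<'EOF'
NIC.nicconfig.1 [Key=NIC.Integrated.1-1-1#nicconfig]
NIC.nicconfig.5 [Key=NIC.Slot.3-1-1#nicconfig]
EOF
```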
<div>
Next we can enable PXE boot on NIC.Integrated.1-1-1 for the set of nodes. In order for the change to take effect, you have to create a job and then reboot.</div>
<pre class="source-code"><code>for n in {48..10} ; do
ip=172.16.3.${n}
echo "IP: $ip - configuring nic.nicconfig.1.legacybootproto PXE"
# Get Nic config for integrated port 1
racadm -r $ip -u root -p xxxxxxx get nic.nicconfig.1 | grep Legacy
# Set to PXE
racadm -r $ip -u root -p xxxxxxx set nic.nicconfig.1.legacybootproto PXE
# Verify it's set to PXE (pending)
racadm -r $ip -u root -p xxxxxxx get nic.nicconfig.1 | grep Legacy
# Create a job to enable the changes following the reboot
racadm -r $ip -u root -p xxxxxxx jobqueue create NIC.Integrated.1-1-1
  # reboot so that the configuration job will execute
ipmitool -I lanplus -H $ip -U root -P xxxxxxx chassis power reset
done </code></pre>
<h4>
Configure the BIOS boot order
</h4>
<div>
Now that the NIC has PXE enabled and the changes have been applied, the boot order can be modified. If this fails for a node, it most likely means the job from the previous step failed to run; start debugging there.</div>
<pre class="source-code"><code>for n in {48..10} ; do
ip=172.16.3.${n}
echo "IP: $ip - configuring BIOS.biosbootsettings.BootSeq NIC.Integrated.1-1-1,...."
# Get Bios Boot sequence
racadm -r $ip -u root -p xxxxxxx get BIOS.biosbootsettings.BootSeq | grep BootSeq
# Set Bios boot sequence
racadm -r $ip -u root -p xxxxxxx set BIOS.biosbootsettings.BootSeq NIC.Integrated.1-1-1,NIC.Integrated.1-3-1,NIC.Slot.3-1-1,Optical.SATAEmbedded.J-1,HardDisk.List.1-1
# Create a BIOS reboot job so that the boot order changes are applied
racadm -r $ip -u root -p xxxxxxx jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW -e TIME_NA
done
</code></pre>
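Once the BIOS jobs have run, it's worth looping a get over the same range to confirm the new order stuck. A hedged sketch, dry-run by default: pass racadm (and real credentials) instead of echo on a live system.

```shell
#!/bin/sh
# verify_bootseq: query BootSeq on every node in the range.
# $1 is the runner: echo for a dry run, racadm for the real thing.
verify_bootseq() {
  for n in $(seq 48 -1 10); do
    "$1" -r 172.16.3.${n} -u root -p xxxxxxx get BIOS.biosbootsettings.BootSeq
  done
}

verify_bootseq echo
```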
<h4>
Modify the switch and port locations in BrightCM</h4>
We use <a href="http://www.brightcomputing.com/product-offerings/bright-cluster-manager-for-hpc" target="_blank">Bright Computing Cluster Manager for HPC</a> to manage our HPC nodes (this recently replaced <a href="http://www.rocksclusters.org/wordpress/" target="_blank">Rocks</a> in our environment). Within BrightCM we had to modify the boot interface for the set of compute nodes. BrightCM provides excellent CLI support, hooray!<br />
<pre class="source-code"><code>cmsh -c "device foreach -n node0001..node0039 (interfaces; use bootif; set networkdevicename eno1); commit"
</code></pre>
<h4>
Update the switch port locations in BrightCM</h4>
<div>
BrightCM keeps track of the switch port to node NIC mapping. One reason for this is to prevent accidentally imprinting the wrong image on nodes that got swapped (i.e. you remove two nodes for service and insert them back into the rack in the wrong location).</div>
<div>
First I had to identify the new port number for a node; I chose the node that would be last in sequence on the switch. This happened to show up in BrightCM as port 171. I found this by PXE booting the compute node; once it comes up, BrightCM notices the discrepancy and displays an interface that allows you to manually address the issue, something akin to "node0039 should be on switch XXX port ## but it's showing up on switch z9500-r05-03-38 port 171" blah blah blah.</div>
<div>
Instead of manually addressing the issue, it can be done via the CLI in bulk (assuming there's a sequence). Each of our nodes has two NICs wired to the Z9500 (node0039 would be on ports 171 and 172), so in the code below I decrement by 2 ports for each node's boot device.</div>
<pre class="source-code"><code>port=171
for n in {39..26} ; do
let port=${port}-2
cmsh -c "device; set node00${n} ethernetswitch z9500-r05-03-38:${port}; commit ;"
done
unset port
</code></pre>
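To make the mapping explicit: the decrement runs before the cmsh call, so as written the loop assigns node0039 port 169, node0038 port 167, and so on down to node0026 on port 143. A throwaway helper for sanity-checking the arithmetic (the function name is mine):

```shell
# boot_port: the switch port the loop above assigns to node00<n>.
boot_port() { echo $(( 169 - 2 * (39 - $1) )); }

boot_port 39   # first node handled by the loop
boot_port 26   # last node handled by the loop
```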
<h2>How To: Clear Dell iDRAC Job Queue</h2>
(2015-08-07)
I'm in the process of deploying 41 new Dell R630 PowerEdge servers in our HPC environment. To help manage the hardware I'm using a new tool (to us anyways), <a href="https://marketing.dell.com/ome-software" target="_blank">Dell OpenManage Essentials</a>.<br />
<br />
OME requires a Microsoft Windows OS, luckily (since we are a Linux shop) it's a snap to install Windows Server 2012 in KVM.<br />
<br />
Some of the functionality provided by OME:<br />
<br />
<ul>
<li>Reporting and alerting</li>
<li>Firmware upgrades</li>
<li>Configuration deployment (BIOS settings, iDRAC, RAID, etc...)</li>
<li>Bare metal provisioning</li>
</ul>
<div>
While OME is free, some of the features require a license. I've only been using OME for a couple of days so I haven't had a chance to test all of its features, but I have found that configuration deployment requires a license (e.g., the ability to push a configuration template out to nodes). Firmware upgrades and reporting do not require a license.</div>
<div>
<br /></div>
<div>
The first task to be handled by OME: firmware upgrades on all 41 nodes. My initial attempts failed. Reading through the logs revealed that the remote clients couldn't reach TCP port 1278 on the OME server. Firmware upgrades started deploying after opening TCP 1278 in the Windows firewall.</div>
<div>
<br /></div>
<div>
Each server had a long list of upgrades, including BIOS, iDRAC, and the six network cards (a mix of 10Gbit and 1Gbit). All of the firmware deployed successfully, with the exception of the Ethernet cards. Grrr, back to scanning the logs.</div>
<div>
<br /></div>
<br />
<div>
<span style="font-family: Courier New, Courier, monospace;">Results: </span></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Downloading Packages.</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Calling InstallFromUri method to Download packages to the iDRAC </span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> There are some pending reboot jobs on the iDRAC that maybe block updating the system. <b style="background-color: yellow;">It is recommended that you clear all the jobs before updating</b>. </span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Downloading Package: Network_Firmware_6FD9P_WN64_16.5.20_A00.EXE onto the iDRAC </span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Package download has successfully started and the Job ID is JID_388846411941</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> The URI given to the iDRAC to download from: http://192.168.2.69:1278/install_packages/Packages/Network_Firmware_6FD9P_WN64_16.5.20_A00.EXE</span></div>
<div>
<br /></div>
<div>
Ok, but how do you do this? I didn't see any native way to do this from within OME, so on to Google.</div>
</div>
<br />
Thanks to <a href="http://www.jonmunday.net/2013/06/20/dell-vc-plugin-idrac-queued-jobs" target="_blank">this post</a> on Jon Munday's blog, I was able to clear the pending jobs with a little PowerShell for loop action to hit all nodes.<br />
<br />
The following command displays the job queue for the range of compute nodes (192.168.2.10 through 192.168.2.50)<br />
<br />
<pre class="source-code"><code>For ($i=10; $i -lt 51; $i++) { winrm e cimv2/root/dcim/DCIM_LifecycleJob -u:$USER -p:$PASSWORD -SkipCNcheck -SkipCAcheck -r:https://192.168.2.$i/wsman -auth:basic -encoding:utf-8 }
</code></pre>
The next command clears the queue. Sorry for the long single line (PowerShell can continue a command across lines with a backtick, but I've kept it on one line here):<br />
<br />
<pre class="source-code"><code>For ($i=10; $i -lt 51; $i++) { winrm invoke DeleteJobQueue "cimv2/root/dcim/DCIM_JobService?CreationClassName=DCIM_JobService+Name=JobService+SystemName=Idrac+SystemCreationClassName=DCIM_ComputerSystem" '@{JobID="JID_CLEARALL"}' -r:https://192.168.2.$i/wsman -u:$USER -p:$PASSWORD -SkipCNCheck -SkipCACheck -auth:basic -encoding:utf-8 -format:pretty }
</code></pre>
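For anyone who'd rather stay on a Linux box, recent iDRAC firmware exposes the same cleanup through racadm's jobqueue delete subcommand (using the same JID_CLEARALL job ID as above). A hedged sketch, dry-run by default; the exact jobqueue syntax can vary between firmware releases.

```shell
#!/bin/sh
# clear_jobs: clear the job queue on each iDRAC in the range.
# $1 is the runner: echo for a dry run, racadm on real hardware.
clear_jobs() {
  for i in $(seq 10 50); do
    "$1" -r 192.168.2.${i} -u root -p xxxxxxx jobqueue delete -i JID_CLEARALL
  done
}

clear_jobs echo
```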
<br />
<br />
<h2>A Linux User's First Time on a Mac</h2>
(2015-05-29)
I'm in the process of switching from a <a href="http://www.dell.com/learn/us/en/555/campaigns/xps-linux-laptop?c=us&l=en&s=biz" target="_blank">Dell XPS 13 (Project Sputnik)</a> ultrabook to a Macbook Pro 13 (Broadwell i7). The battery life on my current laptop was horrible: an hour of real world use with the screen dimmed, while killing running applications like Scrooge. Unfortunately, the XPS's battery is soldered onboard, so it's not easily replaced (if at all?). It still functions great when near power.<br />
<br />
You may be thinking, why would you do such a thing if you are a Linux user?<br />
<br />
For starters, the specs for the Macbook Pro 13 (Spring 2015) are better than the XPS for a similar cost (how much is of course open to interpretation):<br />
<ul>
<li>Intel 3.1 GHz Core i7-5557U vs 3.0 GHz Core i7-5500U Processor</li>
<li>16 GB (optional) vs 8 GB RAM</li>
<li>Intel Iris 6100 vs Intel HD Graphics 5500</li>
<li>PCIe based SSD vs SATA</li>
<li>Non-touch Retina Display vs touch UltraSharp QHD+ (personally I don't see the value yet in having a touch screen on a Linux laptop, especially considering the screen doesn't completely fold over)</li>
<li>Magsafe Power Adapter</li>
<li>Proven battery life</li>
</ul>
<div>
I do most of my work via SSH to the Linux systems, so the workstation doesn't have to be Linux, although it makes life much less painful. I figured, what the heck, let's try a BSD like system that has a history of awesome battery life.</div>
<div>
<br /></div>
<div>
<a href="http://www.quickanddirtytips.com/education/grammar/ado-versus-adieu?page=all" target="_blank">Without further ado</a>, here are some of my experiences using a Mac and OSX for the first time as a Linux user.</div>
<h2>
Useful Applications</h2>
<div>
<ul>
<li><a href="http://ohmyz.sh/" target="_blank">Oh My Zsh</a> - Trying out Zsh shell as an alternative for Bash for the first time, pretty darn cool.</li>
<li><a href="http://lightheadsw.com/caffeine/" target="_blank">Caffeine</a> - Useful for temporarily preventing the laptop from sleeping (don't kill my SSH or VPN connections, dangit!</li>
<li><a href="https://iterm2.com/" target="_blank">iTerm2</a> - Really nice terminal replacement for the builtin OSX terminal. <a href="https://iterm2.com/features.html" target="_blank">Ton's of features</a> like built in Tmux, search, transparent background, it's own built in auto completion (Cmd ;), etc... This part was what got me searching for a new terminal in the first place "Coming from a Unix world? You'll feel at home with focus follows mouse, copy on select, middle button paste, and keyboard shortcuts to avoid mousing."</li>
<li><a href="http://magicprefs.com/" target="_blank">MagicPrefs</a> - This app lets you configure middle mouse paste functionality on the trackpad (set mine to three finger press)</li>
<li><a href="https://itunes.apple.com/us/app/xchat-azure/id447521961?mt=12" target="_blank">XChat Azure</a> - Excellent IRC client</li>
<li><a href="http://www.barebones.com/products/textwrangler/" target="_blank">TextWrangler</a> - Extremely nice graphical script editor</li>
<li><a href="https://products.office.com/en-us/mac/mac-preview" target="_blank">Microsoft Office 2016 Preview</a> - Because I need it Office work</li>
</ul>
</div>
<h2>
Graphical Text Editor</h2>
<div>
I use vi/vim extensively on Linux and now on my Macbook Pro. That said, I do like to edit in a GUI text editor as well. After a good bit of searching around, <a href="http://www.barebones.com/products/textwrangler/" target="_blank">TextWrangler</a> is the (free) one I've been most happy using.</div>
<div>
<br /></div>
<div>
The way I understand it, TextWrangler is sort of the little brother to the professional product BBEdit, which adds "its extensive professional feature set including Web authoring capabilities and software development tools". I primarily work with Ruby (shell scripts, not Rails), Perl, Bash, Puppet and other system management type scripting; TextWrangler works very well for these.</div>
<div>
<br /></div>
<div>
One thing I found missing that I use regularly in other editors is the ability to duplicate a line without the cumbersome highlight, copy, paste. Many GUI editors provide this ability using a shortcut like Ctrl + d, or in vi, yy p (yank yank paste).</div>
<div>
<br /></div>
<div>
After searching around in the keyboard shortcuts a Google search led me to <a href="https://xinrongding.wordpress.com/2013/08/15/add-scripts-to-textwrangler/" target="_blank">this post</a> which mentioned creating an "<a href="http://www.macosxautomation.com/applescript/firsttutorial/02.html" target="_blank">AppleScript</a>" to accomplish the task. What tha?</div>
<div>
<br /></div>
<div>
While the code did work, it left both the original and new lines highlighted, which was a bit annoying. I decided I wanted the cursor to remain where it originally was located. By the way, in TextWrangler, the "<b>cursor</b>" is called the "<b>insertion point</b>" both in the documentation and in AppleScript.</div>
<div>
<br /></div>
<div>
So, here's my updated script (my changes are the single lines following each comment):</div>
<div>
<pre class="source-code"><code>
tell application "TextWrangler"
tell window 1
		# Get the current position of the cursor so we can place it back
set cursorLoc to characterOffset of selection
select line the (startLine of the selection)
copy (contents of the selection) as text to myText
set the contents of the selection to myText & myText
		# Move the cursor back to its original position
select insertion point before character cursorLoc
end tell
end tell
</code></pre>
</div>
<br />
I searched all over the place to get a hint at how to place the cursor back in its original location. As you can see, AppleScript isn't syntactically like Ruby, Perl, Java, etc... It's pretty funky and totally dynamic based on the application being scripted. TextWrangler provides a dictionary, but I didn't find its contents very helpful.<br />
<br />
I finally stumbled on <a href="http://nathangrigg.net/2012/06/bbedit-search-with-applescript/" target="_blank">this post</a> that mentions using "characterOffset of selection". Voila! All in all, it's pretty darn cool that the OS provides a simple way to extend the functionality of a GUI app.<br />
<br />
Create the script in the AppleScript Editor: either launch it from Spotlight Search (Command Space), or click the script menu next to Help in TextWrangler and choose "Open Script Editor".<br />
<br />
Copy and paste the code above, then save it in the directory<br />
"<b><span style="font-family: Courier New, Courier, monospace;">~/Library/Application\ Support/TextWrangler/Scripts</span></b>"<br />
as something like DuplicateLine.scpt (the extension will get added automatically).<br />
<br />
Next, restart TextWrangler and go to Window -> Palettes -> Scripts, click on DuplicateLine in the list and click Set Shortcut. Set it to whatever you like; I set mine to Ctrl D. That shortcut is already assigned to delete line, so I altered the delete line shortcut in preferences to Shift Ctrl D.<br />
<br />
<br />
<h2>Using Check_Openmanage with Check_MK via WATO</h2>
(2014-07-09)
<p>In an <a href="http://flakrat.blogspot.com/2011/03/using-checkopenmanage-with-checkmk.html">older post</a> I described the steps to integrate the <a href="http://folk.uio.no/trondham/software/check_openmanage.html">check_openmanage</a> Nagios plugin with <a href="http://mathias-kettner.com/check_mk_introduction.html">check_mk</a>. That approach required manually editing the etc/check_mk/main.mk file to configure the extra_nagios_conf and legacy_checks.</p>
<p>This updated guide uses the check_mk <a href="http://mathias-kettner.com/checkmk_wato.html">WATO</a> (Web Administration Tool) to integrate the check_openmanage check using a feature called "Active Checks".</p>
<p>Here's the guide, hope it helps:</p>
<h2>Environment:</h2>
<ul><li>CentOS 6.5 x86_64 VMware ESXi hosted virtual machine<br />
<li><a href="http://omdistro.org/">OMD 1.10</a><br />
<li><a href="http://folk.uio.no/trondham/software/check_openmanage.html">check_openmanage 3.7.11</a><br />
<li><a href="http://mathias-kettner.de/check_mk_introduction.html">Check_mk 1.2.5i4</a> upgraded using <a href="http://lists.mathias-kettner.de/pipermail/omd-users/2014-June/000879.html">these steps</a><br />
<li>Dell servers are checked via check_openmanage using SNMP rather than MRPE (or NRPE)<br />
</ul>
<h2>Install Check_openmanage</h2>
Unless otherwise specified, all paths are relative to the site owner's home (ex: /opt/omd/sites/mysite)
<ol><li>Make sure your Dell servers have the following SNMP packages installed prior to installing OMSA (if not, it's easy to 'yum remove srvadmin-\*' then 'yum install srvadmin-all'): net-snmp, net-snmp-libs, net-snmp-utils<br />
<ul><li>Start the OMSA services 'srvadmin-services.sh start' and then check 'srvadmin-services.sh status' to verify that the snmpd component is running<br />
<li>Ensure that snmpd is running and configured<br />
<li>Configure the firewall to allow access from your OMD server to udp port 161<br />
</ul>
<li>Change to the site user on your OMD server: $ su - mysite<br />
<li>Download the latest check_openmanage from <a href="http://folk.uio.no/trondham/software/check_openmanage.html">http://folk.uio.no/trondham/software/check_openmanage.html</a> to ~/tmp and extract<br />
<li>copy the check_openmanage script to local/lib/nagios/plugins (this defaults to $USER2$ in your commands)<br />
<pre class="source-code"><code>
$ cp tmp/check_openmanage-3.7.11/check_openmanage local/lib/nagios/plugins/
$ chmod +x local/lib/nagios/plugins/check_openmanage
</code></pre><li>copy the PNP4Nagios template<br />
<pre class="source-code"><code>
$ cp tmp/check_openmanage-3.7.11/check_openmanage.php etc/pnp4nagios/templates/
</code></pre>
<li>Test check_openmanage to see that it can successfully query a node<br />
<pre class="source-code"><code>
local/lib/nagios/plugins/check_openmanage -H dell-r720xd-01 -p -C MySecretCommunity
OK - System: 'PowerEdge R720xd', SN: 'XXXXXX1', 24 GB ram (6 dimms), 2 logical drives, 14 physical drives|T0_System_Board_Inlet=21C;42;47 T1_System_Board_Exhaust=30C;70;75 T2_CPU1=48C;86;91 T3_CPU2=39C;86;91 W2_System_Board_Pwr_Consumption=126W;0;0 A0_PS1_Current_1=0.6A;0;0 A1_PS2_Current_2=0.2A;0;0 V25_PS1_Voltage_1=240V;0;0 V26_PS2_Voltage_2=240V;0;0 F0_System_Board_Fan1=2280rpm;0;0 F1_System_Board_Fan2=2280rpm;0;0 F2_System_Board_Fan3=2280rpm;0;0 F3_System_Board_Fan4=3000rpm;0;0 F4_System_Board_Fan5=3600rpm;0;0 F5_System_Board_Fan6=3480rpm;0;0
</code></pre>
</ol>
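Before wiring a host into WATO, it's also worth confirming that snmpd on the Dell box answers the OMD server at all. A sketch using snmpget from net-snmp-utils; the hostname and community string are the examples used in this post, and the function only prints the command unless you pass snmpget as the runner.

```shell
#!/bin/sh
# probe: fetch sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) from a host via SNMP v2c.
# $1 is the runner: echo for a dry run, snmpget for a real query.
probe() { "$1" -v2c -c MySecretCommunity "$2" 1.3.6.1.2.1.1.1.0; }

probe echo dell-r720xd-01
```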
<h2>WATO Configuration</h2>
<ol>
<li>Create a Host Group by clicking <b>Host Groups</b> under <b>WATO - Configuration</b>, click <b>New Group</b> (click save when done):
<ul>
<li>Name: omsa
<li>Alias: Dell OpenManage
</ul>
<li>Create a Host Tag by clicking <b>Host Tags</b> under <b>WATO - Configuration</b>, click <b>New Tag Group</b> (click save when done):
<ul>
<li>Internal ID: dellomsa
<li>Topic: (leave empty)
<li>Choices:
<ul>
<li>Tag ID: omsa
<li>Description: Dell OpenManage
</ul>
</ul>
<li>Create an Active Check by clicking <b>Host & Service Parameters</b> under <b>WATO - Configuration</b>, click <b>Active Checks</b>, click <b>Classical active and passive Nagios checks</b> (create a new one, click save when done):
<ul>
<li>Folder: Main Directory
<li>Host Tags: Select <b>Dell OpenManage is set</b>
<li>Service Description: check_openmanage
<li>Command Line: $USER2$/check_openmanage -H $HOSTADDRESS$ -p -C MySecretCommunity
<li>Check Performance Data
</ul>
<li>Add the omsa Host Tag to a host running OpenManage with SNMP configured by clicking <b>Hosts</b> under <b>WATO - Configuration</b>, and click the properties editor (pencil icon) for the host (click <b>Save & go to Services</b> when done):
<ul>
<li>Host tags: under <b>Dell OpenManage</b>, check <b>Dell OpenManage</b> (both checkboxes)
</ul>
</ol>
<br />
On the Host services page you should see the new service at the bottom, example:
<pre>
Custom checks (defined via rule)
Status Checkplugin Item Service Description Plugin output
OK custom check_openmanage check_openmanage OK - System: 'PowerEdge R710', SN: 'XXXXXX1', 24 GB ram (6 dimms), 2 logical drives, 14 physical drives
</pre>
Click <b>Activate Missing</b> services or <b>Save manual check configuration</b>. Activate the changes and you should start seeing the check within a few minutes and graphs after 10 minutes or so.
Hope this helps, and comments are welcome.Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com1tag:blogger.com,1999:blog-5521735836171938846.post-18634955214989020122014-06-24T11:58:00.001-05:002014-08-12T08:56:25.977-05:00Replace The Foreman Self Signed Certificate with a Trusted Certificate<p>I've installed a few <a href="http://theforeman.org/">Foreman</a> servers to provide provisioning and configuration management (via <a href="https://puppetlabs.com/puppet/puppet-open-source">Puppet</a>). This document will cover the steps to replace the self signed certificate used for the web interface with a trusted certificate.</p>
<p>For those unfamiliar with Foreman and Puppet, here are snippets from both project pages (<a href="http://theforeman.org/learn_more.html">http://theforeman.org/learn_more.html</a> and <a href="http://puppetlabs.com/puppet/what-is-puppet">http://puppetlabs.com/puppet/what-is-puppet</a>):</p>
<blockquote>Foreman is an open source project that helps system administrators manage servers throughout their lifecycle, from provisioning and configuration to orchestration and monitoring. Using Puppet or Chef and Foreman's smart proxy architecture, you can easily automate repetitive tasks, quickly deploy applications, and proactively manage change, both on-premise with VMs and bare-metal or in the cloud.<br /><br />
Foreman provides comprehensive, interaction facilities including a web frontend, CLI and RESTful API which enables you to build higher level business logic on top of a solid foundation.</blockquote>
<blockquote>Puppet is IT automation software that defines and enforces the state of your infrastructure throughout your software development cycle. From provisioning and configuration to orchestration and reporting, from initial code development through production release and updates, Puppet frees sysadmins from writing one-off, fragile scripts and other manual tasks. At the same time, Puppet ensures consistency and dependability across your infrastructure.<br /><br />
With Puppet, repetitive tasks are automated away, so sysadmins can quickly deploy business applications, scaling easily from tens of servers to thousands, both on-premise and in the cloud.</blockquote>
<p>By default, the <a href="https://puppetlabs.com/puppet/puppet-open-source">Puppet</a> / <a href="http://theforeman.org/">Foreman</a> <a href="http://theforeman.org/manuals/1.5/index.html#3.InstallingForeman">server install</a> uses Puppet's own internal CA for issuing SSL certificates. The Foreman install defaults to using the Puppet CA self signed cert for the web interface. <b>The following steps will replace The Foreman's SSL certificate for the user web interface, but will leave the Puppet CA and SSL certs in place for Puppet related work.</b></p>
<p>I spent a bit of time trying to get this working, but each attempt resulted in a working web interface and broken Puppet master to client communications and Foreman proxy. Essentially, I was changing the SSL certificate entries in too many locations. Dominic on the #theforeman channel on FreeNode IRC directed me to this <a href="https://groups.google.com/forum/#!topic/foreman-users/MMug-F4hNHg">Google Groups thread</a> that listed the short list of places to make the change. The following steps are based on that post.</p>
<ol><li>Create the SSL key and csr
<br /><pre class="brush:bash">sudo su -
mkdir /root/Incommon-cert
cd /root/Incommon-cert
openssl req -out $(hostname -f)-2048.csr -new -newkey rsa:2048 -nodes -keyout $(hostname -f)-2048.key</pre>
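Before pasting the CSR into the certificate request form, it's worth confirming that the key and CSR actually belong together. A quick sketch (using a hypothetical FQDN and a non-interactive <code>-subj</code>; the command above prompts for the subject fields interactively):

```shell
# Hypothetical FQDN for this sketch; in practice use $(hostname -f) as above
host=puppet.example.com
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=$host" \
  -keyout "$host-2048.key" -out "$host-2048.csr" 2>/dev/null
# The key and the CSR must carry the same RSA modulus
kmod=$(openssl rsa -noout -modulus -in "$host-2048.key")
cmod=$(openssl req -noout -modulus -in "$host-2048.csr")
[ "$kmod" = "$cmod" ] && echo "key and CSR match"
```

If the two moduli differ, the CSR was generated against a different key and the CA-signed cert will never pair with the key you kept.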
<li>Copy the contents of the csr to the clipboard and use it to request an InCommon SSL certificate
<li>Once the cert is approved, download the following files to /root/Incommon-cert on the Puppet / Foreman server:
<ul><li>as X509 Certificate only, Base64 encoded
<li>as X509 Intermediates/root only, Base64 encoded</ul>
<li>Rename the files so that we know these are InCommon files
<br /><pre class="brush:bash">mv puppet.tld.blah.crt puppet.tld.blah-2048-incommon-cert.crt
mv puppet.tld.blah_interm.crt puppet.tld.blah-2048-incommon-interm.crt
chown root:root *.crt</pre>
<li>Copy the files to the appropriate directories
<br /><pre class="brush:bash">cp puppet.tld.blah-2048.key /var/lib/puppet/ssl/private_keys/
cp puppet.tld.blah-2048-incommon-cert.crt /var/lib/puppet/ssl/certs/
cp puppet.tld.blah-2048-incommon-interm.crt /var/lib/puppet/ssl/certs/
wget https://www.incommon.org/certificates/repository/incommon-ssl.ca-bundle -O /var/lib/puppet/ssl/certs/incommon-ssl.ca-bundle</pre>
<li>Set the appropriate permissions and SELinux configs for the key
<br /><pre class="brush:bash">cd /var/lib/puppet/ssl/private_keys/
chown puppet:puppet *.key
chmod 640 *.key
chcon -u system_u -r object_r -t puppet_var_lib_t *.key
ls -lZ
-rw-r-----. puppet puppet system_u:object_r:puppet_var_lib_t:s0 puppet.tld.blah-2048.key
-rw-r-----. puppet puppet system_u:object_r:puppet_var_lib_t:s0 puppet.tld.blah.pem</pre>
<li>Set perms and SELinux for the certs
<br /><pre class="brush:bash">cd /var/lib/puppet/ssl/certs/
chown puppet:puppet *
chcon -u system_u -r object_r -t puppet_var_lib_t *.crt
ls -lZ
-rw-r--r--. puppet puppet system_u:object_r:puppet_var_lib_t:s0 ca.pem
-rw-r--r--. puppet puppet system_u:object_r:puppet_var_lib_t:s0 incommon-ssl.ca-bundle
-rw-r--r--. puppet puppet system_u:object_r:puppet_var_lib_t:s0 puppet.tld.blah-2048-incommon-cert.crt
-rw-r--r--. puppet puppet system_u:object_r:puppet_var_lib_t:s0 puppet.tld.blah-2048-incommon-interm.crt
-rw-r--r--. puppet puppet system_u:object_r:puppet_var_lib_t:s0 puppet.tld.blah.pem</pre>
<li>Next edit the various config files
<ul><li>/etc/puppet/node.rb: Change the line <b>:ssl_ca</b> to use the new interm cert
<br /><pre class="brush:diff">--- /etc/puppet/node.rb.orig 2014-03-24 17:48:09.215000045 -0500
+++ /etc/puppet/node.rb 2014-06-24 10:24:51.049282905 -0500
@@ -8,7 +8,8 @@
:facts => true, # true/false to upload facts
:timeout => 10,
# if CA is specified, remote Foreman host will be verified
- :ssl_ca => "/var/lib/puppet/ssl/certs/ca.pem", # e.g. /var/lib/puppet/ssl/certs/ca.pem
+ #:ssl_ca => "/var/lib/puppet/ssl/certs/ca.pem", # e.g. /var/lib/puppet/ssl/certs/ca.pem
+ :ssl_ca => "/var/lib/puppet/ssl/certs/puppet.tld.blah-2048-incommon-interm.crt", # e.g. /var/lib/puppet/ssl/certs/ca.pem
# ssl_cert and key are required if require_ssl_puppetmasters is enabled in Foreman
:ssl_cert => "/var/lib/puppet/ssl/certs/puppet.tld.blah.pem", # e.g. /var/lib/puppet/ssl/certs/FQDN.pem
:ssl_key => "/var/lib/puppet/ssl/private_keys/puppet.tld.blah.pem" # e.g. /var/lib/puppet/ssl/private_keys/FQDN.pem</pre>
<li>/usr/lib/ruby/site_ruby/1.8/puppet/reports/foreman.rb: Change the <b>foreman_ssl_ca =</b> line to use the interm cert
<br /><pre class="brush:diff">--- /usr/lib/ruby/site_ruby/1.8/puppet/reports/foreman.rb.orig 2014-03-24 17:44:37.494000046 -0500
+++ /usr/lib/ruby/site_ruby/1.8/puppet/reports/foreman.rb 2014-06-24 10:28:54.497406986 -0500
@@ -5,7 +5,8 @@
# URL of your Foreman installation
$foreman_url='https://puppet.tld.blah'
# if CA is specified, remote Foreman host will be verified
-$foreman_ssl_ca = "/var/lib/puppet/ssl/certs/ca.pem"
+#$foreman_ssl_ca = "/var/lib/puppet/ssl/certs/ca.pem"
+$foreman_ssl_ca = "/var/lib/puppet/ssl/certs/puppet.tld.blah-2048-incommon-interm.crt"
# ssl_cert and key are required if require_ssl_puppetmasters is enabled in Foreman
$foreman_ssl_cert = "/var/lib/puppet/ssl/certs/puppet.tld.blah.pem"
$foreman_ssl_key = "/var/lib/puppet/ssl/private_keys/puppet.tld.blah.pem"</pre>
<li>/etc/httpd/conf.d/05-foreman-ssl.conf: Change three lines <b>SSLCertificateFile</b>, <b>SSLCertificateKeyFile</b> and <b>SSLCertificateChainFile</b> to use the new cert, key and CA bundle respectively
<br /><pre class="brush:diff">--- /etc/httpd/conf.d/05-foreman-ssl.conf.orig 2014-06-24 10:30:59.917531640 -0500
+++ /etc/httpd/conf.d/05-foreman-ssl.conf 2014-06-24 10:32:36.318164714 -0500
@@ -35,11 +35,18 @@
## SSL directives
SSLEngine on
- SSLCertificateFile /var/lib/puppet/ssl/certs/puppet.tld.blah.pem
- SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.tld.blah.pem
- SSLCertificateChainFile /var/lib/puppet/ssl/certs/ca.pem
+ SSLCertificateFile "/var/lib/puppet/ssl/certs/puppet.tld.blah-2048-incommon-cert.crt"
+ SSLCertificateKeyFile "/var/lib/puppet/ssl/private_keys/puppet.tld.blah-2048.key"
+ SSLCertificateChainFile "/var/lib/puppet/ssl/certs/incommon-ssl.ca-bundle"
SSLCACertificatePath /etc/pki/tls/certs
SSLCACertificateFile /var/lib/puppet/ssl/certs/ca.pem
+
+# SSLCertificateFile /var/lib/puppet/ssl/certs/puppet.tld.blah.pem
+# SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.tld.blah.pem
+# SSLCertificateChainFile /var/lib/puppet/ssl/certs/ca.pem
+# SSLCACertificatePath /etc/pki/tls/certs
+# SSLCACertificateFile /var/lib/puppet/ssl/certs/ca.pem
+
SSLVerifyClient optional
SSLVerifyDepth 3
SSLOptions +StdEnvVars</pre></ul>
<li>Restart the services (foreman-proxy restart probably isn't necessary but may as well)
<br /><pre class="brush:bash">service httpd restart
service foreman-proxy restart</pre></ol>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com1tag:blogger.com,1999:blog-5521735836171938846.post-11555827458202806282013-08-28T16:33:00.000-05:002013-08-29T17:06:47.748-05:00Fun with Bash :: SumsI frequently find the need to sum up long lists of numbers from the Bash command shell. The following is a nice bit of code that sums up the bytes contained in a series of files and prints each sum in GB.<br />
<br />
The current directory has files *-bytes.log that each contain a single column with a number of bytes on each row, example:<br />
<br />
<pre>53824
247104
61776
53824
53824
53824
247104
53824
247104
542</pre>
<br />
The code contains an outer loop to iterate the files and an inner loop to process the contents of the files
<br />
<pre class="brush:bash">for file in $(ls *-bytes.log); do
sum=0;
for num in $(cat $file); do
let sum=$sum+$num;
done
echo "$sum/1024/1024/1024" | bc -l | xargs printf "$file %1.2f GB\n";
done</pre>
<br />
This is an alternative approach using an inner while loop to read the numbers rather than cat'ing the file
<br />
<pre class="brush:bash">for file in $(ls *-bytes.log); do
sum=0;
while read num; do
let sum=$sum+$num;
done < $file
echo "$sum/1024/1024/1024" | bc -l | xargs printf "$file %1.2f GB\n";
done</pre>
<br />
Sample results<br />
<pre>user1-bytes.log 3.81 GB
user2-bytes.log 4.75 GB
user3-bytes.log 1.40 GB
user4-bytes.log 2.03 GB
user5-bytes.log 5.01 GB
...
</pre>
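The same report can be produced in a single pass per file with awk, which avoids the shell arithmetic loop entirely. A sketch (the sample file and its contents are made up here so the snippet is self-contained; real runs would use the existing *-bytes.log files):

```shell
# Create a small sample file so the snippet stands alone (hypothetical data)
printf '4096000000\n1024000000\n' > user1-bytes.log
# One pass per file: awk sums column 1 and prints the total in GB
for file in *-bytes.log; do
  awk -v f="$file" '{ s += $1 } END { printf "%s %1.2f GB\n", f, s/1024/1024/1024 }' "$file"
done
```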
<br />
I've also found the following bit of code helpful in my $HOME/.bashrc to make quick calculations easy (this comes from a <a href="http://www.cyberciti.biz/" target="_blank">nixCraft</a> Facebook post):<br />
<br />
<pre class="brush:bash"># nixCraft (on Facebook) calc recipe
# Usage: calc "10+2"
# Pi: calc "scale=10; 4*a(1)"
# Temperature: calc "30 * 1.8 + 32"
# calc "(86 - 32)/1.8"
calc() { echo "$*" | bc -l; }
</pre>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com0tag:blogger.com,1999:blog-5521735836171938846.post-1586626624975334272013-06-13T13:46:00.000-05:002013-08-21T10:08:59.928-05:00Building GCC 4.8.1 on CentOS 5.9<h3>
References:</h3>
<ul>
<li><a href="http://verahill.blogspot.com/2012/07/compiling-gcc-471gfortran-471-on-centos.html">http://verahill.blogspot.com/2012/07/compiling-gcc-471gfortran-471-on-centos.html</a> - I borrowed heavily from Lindqvist post along with some of modulefile from the comment by Mark Dwyer
</li>
<li><a href="http://dollarvar.blogspot.com/2013/03/building-gcc-48.html">http://dollarvar.blogspot.com/2013/03/building-gcc-48.html</a> - Borrowed additional modulefile bits from this post
</li>
<li><a href="http://gcc.gnu.org/install/configure.html">http://gcc.gnu.org/install/configure.html</a>
</li>
<li><a href="http://gcc.gnu.org/install/prerequisites.html">http://gcc.gnu.org/install/prerequisites.html</a>
</li>
</ul>
<h3>
Build:</h3>
<ol>
<li>Install packages for dependencies (readline-devel is needed for the guile build; dejagnu is needed for gcc's make check)
<pre class="brush:bash">sudo yum -y install readline readline-devel guile guile-devel dejagnu expect
</pre>
</li>
<li>Download and untar all the sources
<pre class="brush:bash">mkdir -p ~/sources/gcc
cd ~/sources/gcc
gccver=4.8.1
gmpver=5.1.2
mpfrver=3.1.2
mpcver=1.0.1
binutilsver=2.23.2
libtoolver=2.4.2
libunistringver=0.9.3
libffiver=3.0.13
gcver=7.2d
autogenver=5.17.4
guilever=2.0.9
prefixroot=/share/apps/tools
wget http://www.netgull.com/gcc/releases/gcc-4.8.1/gcc-${gccver}.tar.bz2
wget ftp://ftp.gnu.org/gnu/gmp/gmp-${gmpver}.tar.bz2
wget http://mpfr.loria.fr/mpfr-current/mpfr-${mpfrver}.tar.bz2
wget http://www.multiprecision.org/mpc/download/mpc-${mpcver}.tar.gz
wget http://ftp.gnu.org/gnu/binutils/binutils-${binutilsver}.tar.bz2
wget http://mirror.sbb.rs/gnu/libtool/libtool-${libtoolver}.tar.gz
wget http://ftp.gnu.org/gnu/libunistring/libunistring-${libunistringver}.tar.gz
wget ftp://sourceware.org/pub/libffi/libffi-${libffiver}.tar.gz
wget http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gc-${gcver}.tar.gz
wget http://ftp.gnu.org/gnu/autogen/rel5.17.4/autogen-${autogenver}.tar.gz
wget ftp://ftp.gnu.org/gnu/guile/guile-${guilever}.tar.gz
for file in *.{gz,bz2}; do tar xf $file; done
for dir in `find . -maxdepth 1 -type d | grep -v ^.$ | cut -d \/ -f2` ; do
app=$(echo $dir | cut -f1 -d-);
ver=$(echo $dir | cut -f2 -d-);
mkdir -p ${prefixroot}/${app}/${ver} ;
done
mv gc-7.2 gc-7.2d
</pre>
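The directory-creation loop above leans on the <code>name-version</code> convention of the unpacked source directories; the <code>cut</code> parsing can be checked in isolation (with a scratch prefix root so nothing real is touched):

```shell
prefixroot=/tmp/prefix-demo   # scratch root for this sketch only
for dir in gmp-5.1.2 mpfr-3.1.2 gc-7.2d; do
  app=$(echo "$dir" | cut -f1 -d-)   # text before the first '-'
  ver=$(echo "$dir" | cut -f2 -d-)   # text after it
  mkdir -p "${prefixroot}/${app}/${ver}"
done
ls "${prefixroot}/gmp"   # → 5.1.2
```

This is also why the gc tarball needs the trailing <code>mv gc-7.2 gc-7.2d</code>: the release is versioned 7.2d but unpacks as gc-7.2.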
</li>
<li>Build gmp
<pre class="brush:bash">cd gmp-${gmpver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/gmp/${gmpver}
make
make check
make install
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
Version: GNU MP 5.1.2
Host type: coreisbr-unknown-linux-gnu
ABI: 64
Install prefix: /share/apps/tools/gmp/5.1.2
Compiler: gcc -std=gnu99
Static libraries: yes
Shared libraries: yes
Libraries have been installed in:
/share/apps/gcc/gmp/5.1.2/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
</pre>
</li>
<li>Build mpfr
<pre class="brush:bash">cd ../../mpfr-${mpfrver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/mpfr/${mpfrver} \
--with-gmp=${prefixroot}/gmp/${gmpver}
make
make check
make install
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
Libraries have been installed in:
/share/apps/gcc/mpfr/3.1.2/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
</pre>
</li>
<li>Build mpc
<pre class="brush:bash">cd ../../mpc-${mpcver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/mpc/${mpcver} \
--with-gmp=${prefixroot}/gmp/${gmpver} \
--with-mpfr=${prefixroot}/mpfr/${mpfrver}
make
make check
make install
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
Libraries have been installed in:
/share/apps/gcc/mpc/1.0.1/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
</pre>
</li>
<li>Build Binutils
<pre class="brush:bash">cd ../../binutils-${binutilsver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/binutils/${binutilsver}
make
make check
make install
</pre>
</li>
<li>Build libtool (make check runs for quite a while, upwards of an hour. Runs 126 tests.)
<pre class="brush:bash">cd ../../libtool-${libtoolver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/libtool/${libtoolver}
make
make check
make install
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
## ------------- ##
## Test results. ##
## ------------- ##
105 tests behaved as expected.
21 tests were skipped.
</pre>
</li>
<li>Build libunistring
<pre class="brush:bash">cd ../../libunistring-${libunistringver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/libunistring/${libunistringver}
make
make check
make install
</pre>
</li>
<li>Build libffi
<pre class="brush:bash">cd ../../libffi-${libffiver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/libffi/${libffiver}
make
make check
make install
cd ${prefixroot}/libffi/${libffiver}
ln -s lib/libffi-${libffiver}/include
cd -
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
Libraries have been installed in:
/share/apps/tools/libffi/3.0.13/lib/../lib64
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
</pre>
</li>
<li>Build gc
<pre class="brush:bash">cd ../../gc-${gcver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/gc/${gcver}
make
make check
make install
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
Libraries have been installed in:
/share/apps/tools/gc/7.2d/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
</pre>
</li>
<li>Build guile. This kept blowing up with the error shown after the code. <b>The CentOS base repo actually has guile and guile-devel packages that are new enough to support autogen, so I used the packages instead. I'm leaving this step for future reference.</b>
<pre class="brush:bash">cd ../../guile-${guilever}
mkdir build
cd build
.././configure --prefix=${prefixroot}/guile/${guilever} \
--with-libltdl-prefix=${prefixroot}/libtool/${libtoolver} \
--with-libgmp-prefix=${prefixroot}/gmp/${gmpver} \
--with-libunistring-prefix=${prefixroot}/libunistring/${libunistringver} \
LDFLAGS="-L${prefixroot}/libtool/${libtoolver}/lib -L${prefixroot}/gmp/${gmpver}/lib -L${prefixroot}/libunistring/${libunistringver}/lib -L${prefixroot}/libffi/${libffiver}/lib -L${prefixroot}/gc/${gcver}/lib" \
CPPFLAGS="-I${prefixroot}/libtool/${libtoolver}/include -I${prefixroot}/libtool/${libtoolver}/include/libltdl -I${prefixroot}/libffi/${libffiver}/include -I${prefixroot}/gc/${gcver}/include -I${prefixroot}/gmp/${gmpver}/include" \
LIBFFI_CFLAGS=-I${prefixroot}/libffi/${libffiver}/include \
LIBFFI_LIBS=-L${prefixroot}/libffi/${libffiver}/lib \
BDW_GC_CFLAGS=-I${prefixroot}/gc/${gcver}/include \
BDW_GC_LIBS=-L${prefixroot}/gc/${gcver}/lib
make
make check
make install
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
make[1]: Entering directory `/home/mhanby/sources/gcc/guile-2.0.9/build'
Making all in lib
make[2]: Entering directory `/home/mhanby/sources/gcc/guile-2.0.9/build/lib'
make all-recursive
make[3]: Entering directory `/home/mhanby/sources/gcc/guile-2.0.9/build/lib'
make[4]: Entering directory `/home/mhanby/sources/gcc/guile-2.0.9/build/lib'
make[4]: Nothing to be done for `all-am'.
make[4]: Leaving directory `/home/mhanby/sources/gcc/guile-2.0.9/build/lib'
make[3]: Leaving directory `/home/mhanby/sources/gcc/guile-2.0.9/build/lib'
make[2]: Leaving directory `/home/mhanby/sources/gcc/guile-2.0.9/build/lib'
Making all in meta
make[2]: Entering directory `/home/mhanby/sources/gcc/guile-2.0.9/build/meta'
make[2]: Nothing to be done for `all'.
make[2]: Leaving directory `/home/mhanby/sources/gcc/guile-2.0.9/build/meta'
Making all in libguile
make[2]: Entering directory `/home/mhanby/sources/gcc/guile-2.0.9/build/libguile'
make all-am
make[3]: Entering directory `/home/mhanby/sources/gcc/guile-2.0.9/build/libguile'
CC libguile_2.0_la-finalizers.lo
../.././libguile/finalizers.c:167: error: static declaration of 'GC_set_finalizer_notifier' follows non-static declaration
/share/apps/tools/gc/7.2d/include/gc/gc.h:177: error: previous declaration of 'GC_set_finalizer_notifier' was here
make[3]: *** [libguile_2.0_la-finalizers.lo] Error 1
make[3]: Leaving directory `/home/mhanby/sources/gcc/guile-2.0.9/build/libguile'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/home/mhanby/sources/gcc/guile-2.0.9/build/libguile'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/mhanby/sources/gcc/guile-2.0.9/build'
make: *** [all] Error 2
</pre>
</li>
<li>Build autogen
<pre class="brush:bash">cd ../../autogen-${autogenver}
mkdir build
cd build
.././configure --prefix=${prefixroot}/autogen/${autogenver}
make
make check
make install
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
------------------------------------------------------------------------
Configuration:
Source code location: ../.
Compiler: gcc -std=gnu99
Compiler flags: -g -O2
Host System Type: x86_64-unknown-linux-gnu
Install path: /share/apps/tools/autogen/5.17.4
See config.h for further configuration information.
------------------------------------------------------------------------
Libraries have been installed in:
/share/apps/tools/autogen/5.17.4/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
</pre>
</li>
<li>Build gcc (make took approximately 2.5 hours to run on a system with a 2.7GHz Intel Xeon E5-2680, make check took about 3 hours)
<pre class="brush:bash">cd ../../gcc-${gccver}
mkdir build
cd build
OLD_LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${prefixroot}/gmp/${gmpver}/lib:${prefixroot}/mpc/${mpcver}/lib:${prefixroot}/mpfr/${mpfrver}/lib:${LD_LIBRARY_PATH}
.././configure \
--prefix=${prefixroot}/gcc/${gccver} \
--with-gmp=${prefixroot}/gmp/${gmpver} \
--with-mpfr=${prefixroot}/mpfr/${mpfrver} \
--with-mpc=${prefixroot}/mpc/${mpcver} \
LD_FOR_TARGET=ld \
AS_FOR_TARGET=as \
--with-ld=${prefixroot}/binutils/${binutilsver}/bin/ld \
--with-as=${prefixroot}/binutils/${binutilsver}/bin/as \
--with-ar=${prefixroot}/binutils/${binutilsver}/bin/ar \
AR_FOR_TARGET=ar
time make
#real 150m4.427s
#user 101m21.408s
#sys 12m54.047s
OLD_PATH=${PATH}
export PATH=/share/apps/tools/autogen/5.17.4/bin:${PATH}
time make -k check
make install
export PATH=${OLD_PATH}
export LD_LIBRARY_PATH=${OLD_LD_LIBRARY_PATH}
</pre>
<pre class="brush:bash">EXAMPLE OUTPUT
================================================
----------------------------------------------------------------------
Libraries have been installed in:
/share/apps/tools/gcc/4.8.1/lib/../lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
----------------------------------------------------------------------
Libraries have been installed in:
/share/apps/tools/gcc/4.8.1/lib/../lib64
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
</pre>
</li>
<li>Create the module file (for <a href="http://modules.sourceforge.net/">Environment Modules support</a>)
<pre class="brush:ruby">#%Module1.0####################################################################
##
## mymodule modulefile
##
## Sets up the 4.8.1 GCC compilation environment
##
set ver [lrange [split [ module-info name ] / ] 1 1 ]
set name [lrange [split [ module-info name ] / ] 0 0 ]
set loading [module-info mode load]
set subname [lrange [split $name - ] 0 0 ]
proc ModulesHelp { } {
puts stderr "\tThis module sets the environment for $name v$ver"
}
module-whatis "Set environment variables to use $name version $ver"
conflict compiler
set base /share/apps/tools
set gccdir $base/$subname/$ver
set binutilsdir $base/binutils/2.23.2
set gmpdir $base/gmp/5.1.2
set mpcdir $base/mpc/1.0.1
set mpfrdir $base/mpfr/3.1.2
## Add bin directories to the path
prepend-path PATH $gccdir/bin
prepend-path PATH $binutilsdir/bin
prepend-path LD_LIBRARY_PATH $mpfrdir/lib
prepend-path LD_LIBRARY_PATH $mpcdir/lib
prepend-path LD_LIBRARY_PATH $gmpdir/lib
prepend-path LD_LIBRARY_PATH $binutilsdir/lib
prepend-path LD_LIBRARY_PATH $binutilsdir/lib64
prepend-path LD_LIBRARY_PATH $gccdir/lib
prepend-path LD_LIBRARY_PATH $gccdir/lib64
prepend-path MANPATH $gccdir/share/man
prepend-path C_INCLUDE_PATH $gccdir/include
prepend-path CPLUS_INCLUDE_PATH $gccdir/include
setenv CC $gccdir/bin/gcc
setenv CXX $gccdir/bin/g++
setenv FC $gccdir/bin/gfortran
if { [ module-info mode load ] } {
puts stderr "Note: $name $ver environment loaded."
}
</pre>
</li>
</ol>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com3tag:blogger.com,1999:blog-5521735836171938846.post-11376394758892455512013-04-19T11:14:00.001-05:002013-06-13T10:41:42.190-05:00How To Compile Gromacs 4.6.1 with OpenMPI 1.6.4 on CentOSI took the following notes while installing Gromacs 4.6.1 recently. Hopefully others will find them useful.
The software was compiled and installed on a CentOS 5.9 x86_64 system with QDR InfiniBand. The system uses SGE (Grid Engine) as its scheduler.
Intel compilers 13.1.1 were used to compile both OpenMPI and Gromacs.
<h2>Intel Compilers</h2>
I used version 13.1.1 of the Intel compilers. I'm including the module file here for reference
<pre class="brush:ruby">#%Module
set ver 13.1.1
set arch intel64
set root /share/apps/intel/parallel_studio_xe_2013_update3/composer_xe_2013.3.163
set mklroot $root/mkl
set msg "This module adds Intel compilers v$ver to various paths"
proc ModulesHelp { } {
puts stderr $msg
}
module-whatis $msg
setenv MKLROOT $mklroot
prepend-path MANPATH $root/man/en_US
prepend-path INTEL_LICENSE_FILE $root/licenses
prepend-path NLSPATH $root/compiler/lib/$arch/locale/en_US:$root/ipp/lib/$arch/locale/en_US:$root/mkl/lib/$arch/locale/en_US:$root/debugger/lib/$arch/locale/en_US
prepend-path LIBRARY_PATH $root/compiler/lib/$arch:$root/ipp/lib/$arch:$root/mkl/lib/$arch:$root/tbb/lib/$arch/gcc4.1
prepend-path MIC_LD_LIBRARY_PATH $root/compiler/lib/mic:$root/mkl/lib/mic:$root/tbb/lib/mic
prepend-path LD_LIBRARY_PATH $root/compiler/lib/$arch:$root/mpirt/lib/$arch:$root/ipp/lib/$arch:$root/mkl/lib/$arch:$root/tbb/lib/$arch/gcc4.1
prepend-path PATH $root/bin/$arch:$root/mpirt/bin/$arch:$root/bin/${arch}_mic:$root/debugger/gui/$arch
prepend-path CPATH $root/mkl/include:$root/tbb/include
prepend-path INCLUDE $root/mkl/include
prepend-path IPPROOT $root/ipp
prepend-path TBBROOT $root/tbb
prepend-path IDB_HOME $root/bin/$arch</pre>
<h2>Build OpenMPI 1.6.4</h2>
First, build the latest OpenMPI with the Intel compilers. This version of OpenMPI will reside in the Gromacs 4.6.1 install path.
<ol>
<li>Create the install base directory
<pre class="brush:bash">cd /share/apps/gromacs/4.6.1/build
wget http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.4.tar.bz2
</code></pre>
<li>Create the build script for Intel compilers: /share/apps/gromacs/4.6.1/build/build-openmpi-intel.sh
<pre class="brush:bash">#!/bin/sh
. /etc/profile.d/modules.sh
ver=1.6
rel=1.6.4
grover=4.6.1
src="openmpi-${rel}.tar.bz2"
url="http://www.open-mpi.org/software/ompi/v${ver}/downloads/${src}"
compiler="intel"
basedir="/share/apps/gromacs/${grover}/openmpi"
prefix="${basedir}/${rel}-${compiler}"
if [ -d $prefix ]; then
echo "Installation directory already exists, exiting to prevent overwrite: $prefix";
exit 1
else
echo "Verified installation directory does not exist: $prefix"
fi
if [ ! -f $src ]; then
echo "Downloading source code from: $url";
wget $url;
else
echo "Using existing source code: $src";
fi
if [ -d "openmpi-${rel}" ]; then
echo "Removing existing build directory ./openmpi-${rel}"
rm -rf openmpi-${rel};
fi
echo "Extracting $src"
tar -jxf $src
cd openmpi-${rel}
echo "Building OpenMPI $rel for Intel compilers"
echo "Running configure"
module purge
module load intel/intel-compilers-13.1.1
CC=icc CXX=icpc F77=ifort FC=ifort ./configure --with-sge --with-openib --prefix=$prefix --enable-static --enable-shared
make all install
module purge
cd ..
</code></pre>
<li>Run the compile and install
<pre class="brush:bash">chmod +x build-openmpi-intel.sh
./build-openmpi-intel.sh | tee openmpi-1.6.4-intel.log
</code></pre>
<li>Create the module file (based on <a href="http://www.levlafayette.com/node/145">this example</a>) in the Gromacs base directory
<pre class="brush:bash">mkdir -p /share/apps/gromacs/4.6.1/modulefiles/gromacs-openmpi-intel
vim /share/apps/gromacs/4.6.1/modulefiles/gromacs-openmpi-intel/1.6.4
</code></pre>
<pre class="brush:ruby">#%Module1.0#####################################################################
##
## $name modulefile
##
# /share/apps/gromacs/4.6.1/openmpi/1.6.4-intel/bin
# /share/apps/gromacs/4.6.1/openmpi/gromacs-openmpi-intel/1.6.4-intel/bin
set ver [lrange [split [ module-info name ] / ] 1 1 ]
set name [lrange [split [ module-info name ] / ] 0 0 ]
set loading [module-info mode load]
set subname [lrange [split $name - ] 0 0 ]
#set compiler [lrange [split $name - ] 1 1 ]
set compiler "intel"
set compilerver "13.1.1"
proc ModulesHelp { } {
global name ver
puts stderr "\tThis module sets the environment for $name v$ver compiled\n\twith Intel v13.1.1 compilers"
}
module-whatis "Set environment variables to use $name version $ver"
if { $loading && ![ is-loaded $compiler/$compilerver ] } {
module load $compiler/$compilerver
}
set basedir /share/apps/gromacs/4.6.1/openmpi/${ver}-$compiler
prepend-path --delim " " CPPFLAGS -I$basedir/include
prepend-path --delim " " LDFLAGS -L$basedir/lib
prepend-path LD_LIBRARY_PATH $basedir/lib
prepend-path MANPATH $basedir/share/man
prepend-path PATH $basedir/bin
setenv MPI_DIR $basedir/
setenv MPI_HOME $basedir/
setenv OPENMPI_ROOT $basedir/
</code></pre>
<li>Run a test
<ul>
<li>Create the input C file /share/apps/openmpi/build/mpi-helloworld.c
<pre class="brush:c">#include <stdio.h>
#include <string.h>
#include <stddef.h>
#include <stdlib.h>
#include <unistd.h> /* for gethostname() */
#include "mpi.h"
int main(int argc, char **argv) {
    int rank, size;
    int buflen = 512;
    char name[buflen];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(name, buflen);
    printf("P rank %d of %d, host %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}
</code></pre>
<li>Compile mpi-helloworld.c
<pre class="brush:bash">module use /share/apps/gromacs/4.6.1/modulefiles
module load gromacs-openmpi-intel/1.6.4
mpicc -o mpi-helloworld-intel mpi-helloworld.c
</code></pre>
<li>run it as a job
<pre class="brush:bash">module purge
module use /share/apps/gromacs/4.6.1/modulefiles
module load gromacs-openmpi-intel/1.6.4
qsub -V -b y \
-l h_rt=00:15:00 \
-l vf=256M \
-j y \
-pe openmpi\* 8 \
mpirun -n 8 /share/apps/openmpi/build/mpi-helloworld-intel
</code></pre>
</ul>
</ol>
<h2>Build Gromacs 4.6.1</h2>
Official Gromacs 4.6 <a href="http://www.gromacs.org/Documentation/Installation_Instructions">build instructions</a>
The instructions have changed dramatically from versions 4.5 and prior.
One listed requirement is CMake version 2.8 or later. EPEL only provides CMake 2.6.4, so we'll need a newer CMake; the steps below install the prebuilt Linux binary tarball from cmake.org.
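Whether an installed cmake is already new enough can be checked mechanically; a minimal sketch using GNU `sort -V` for the version comparison (the version strings are just examples):

```shell
# Compare dotted version strings with sort -V: if the required version
# sorts first (or the two are equal), the installed version is new enough.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "2.6.4" "2.8.0"; then   # EPEL's 2.6.4 vs the required 2.8
    echo "cmake is new enough"
else
    echo "need a newer cmake"
fi
```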
<ol>
<li>Download the latest CMake from here: http://www.cmake.org/cmake/resources/software.html
<pre class="brush:bash">sudo mkdir /share/apps/cmake
sudo chown mhanby:atlab /share/apps/cmake
cd /share/apps/cmake
wget http://www.cmake.org/files/v2.8/cmake-2.8.10.2-Linux-i386.tar.gz
tar -zxf cmake-2.8.10.2-Linux-i386.tar.gz
</code></pre>
<li>Create the build directory and download the source
<pre class="brush:bash">mkdir -p /share/apps/gromacs/4.6.1/build
cd /share/apps/gromacs/4.6.1/build
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.1.tar.gz
tar -zxf gromacs-4.6.1.tar.gz
mkdir gromacs-4.6.1/build
cd gromacs-4.6.1/build
</code></pre>
<li>Create the module files
<pre class="brush:bash">mkdir /share/apps/modulefiles/gromacs-intel
vim /share/apps/modulefiles/gromacs-intel/4.6.1
</code></pre>
<pre class="brush:ruby">#%Module
set ver 4.6.1
set app "Gromacs"
set url "http://gromacs.org/"
set approot /share/apps/gromacs/$ver
set msg "This module adds $app v$ver to various paths\nWebsite: $url\n\n"
proc ModulesHelp { } {
global msg
puts stderr $msg
}
module-whatis $msg
module use --append $approot/modulefiles
module load intel/13.1.1
#module load $approot/modulefiles/gromacs-openmpi-intel/1.6.4
module load gromacs-openmpi-intel/1.6.4
setenv GROMACSHOME $approot
#setenv MKL_HOME $::env(MKLROOT)
prepend-path PATH $approot/bin
prepend-path LD_LIBRARY_PATH $approot/lib
prepend-path MANPATH $approot/man
prepend-path MODULEPATH $approot/modulefiles
prepend-path PKG_CONFIG_PATH $approot/lib/pkgconfig
</code></pre>
<li>The next steps will build single and double precision in serial and parallel
<ul>
<li>Serial single precision
<ol>
<li>Run cmake to configure the build
<pre class="brush:bash">module purge
module load gromacs-intel/4.6.1
export PATH=/share/apps/cmake/cmake-2.8.10.2-Linux-i386/bin:$PATH
cd /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
rm -rf *
cmake .. \
-DCMAKE_C_COMPILER=icc \
-DCMAKE_CXX_COMPILER=icpc \
-DGMX_BUILD_OWN_FFTW=ON \
-DCMAKE_INSTALL_PREFIX=/share/apps/gromacs/4.6.1 \
-DREGRESSIONTEST_DOWNLOAD=ON
</code></pre>
<li>View the configuration
<pre class="brush:bash">ccmake ..
BUILD_SHARED_LIBS ON
CMAKE_BUILD_TYPE Release
CMAKE_INSTALL_PREFIX /share/apps/gromacs/4.6.1
CMAKE_PREFIX_PATH
CUDA_HOST_COMPILER /share/apps/intel/parallel_studio_xe_2013_update3/composer_xe_2013.3.163/bin/intel64/icc
GMX_CPU_ACCELERATION SSE4.1
GMX_DEFAULT_SUFFIX ON
GMX_DOUBLE OFF
GMX_FFT_LIBRARY fftw3
GMX_GPU OFF
GMX_GSL OFF
GMX_MPI OFF
GMX_OPENMP ON
GMX_QMMM_PROGRAM none
GMX_THREAD_MPI ON
GMX_X11 OFF
REGRESSIONTEST_DOWNLOAD OFF
</code></pre>
<li>make
<pre class="brush:bash">make
</code></pre>
<li>Run the tests
<pre class="brush:bash">make check
[ 70%] Built target gmx
[ 71%] Built target gmxfftw
[ 80%] Built target md
[ 92%] Built target gmxana
[ 92%] Built target editconf
[ 96%] Built target gmxpreprocess
[ 97%] Built target grompp
[ 97%] Built target pdb2gmx
[ 97%] Built target gmxcheck
[100%] Built target mdrun
[100%] Built target gmxtests
Test project /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
Start 1: regressiontests/simple
1/5 Test #1: regressiontests/simple ........... Passed 72.17 sec
Start 2: regressiontests/complex
2/5 Test #2: regressiontests/complex .......... Passed 868.76 sec
Start 3: regressiontests/kernel
3/5 Test #3: regressiontests/kernel ........... Passed 583.48 sec
Start 4: regressiontests/freeenergy
4/5 Test #4: regressiontests/freeenergy ....... Passed 695.84 sec
Start 5: regressiontests/pdb2gmx
5/5 Test #5: regressiontests/pdb2gmx .......... Passed 379.29 sec
100% tests passed, 0 tests failed out of 5
Total Test time (real) = 2599.69 sec
[100%] Built target check
</code></pre>
<li>Install
<pre class="brush:bash">make install
...
-- Installing: /share/apps/gromacs/4.6.1/lib/pkgconfig/libgmxana.pc
-- Installing: /share/apps/gromacs/4.6.1/bin/GMXRC
-- Installing: /share/apps/gromacs/4.6.1/bin/GMXRC.bash
-- Installing: /share/apps/gromacs/4.6.1/bin/GMXRC.zsh
-- Installing: /share/apps/gromacs/4.6.1/bin/GMXRC.csh
-- Installing: /share/apps/gromacs/4.6.1/bin/completion.csh
-- Installing: /share/apps/gromacs/4.6.1/bin/completion.bash
-- Installing: /share/apps/gromacs/4.6.1/bin/completion.zsh
-- Installing: /share/apps/gromacs/4.6.1/bin/demux.pl
-- Installing: /share/apps/gromacs/4.6.1/bin/xplor2gmx.pl
</code></pre>
</ol>
<li>Serial double precision (appends _d to the applicable installed files, for example mdrun_d)
<ol>
<li>Run cmake to configure the build
<pre class="brush:bash">module purge
module load gromacs-intel/4.6.1
export PATH=/share/apps/cmake/cmake-2.8.10.2-Linux-i386/bin:$PATH
cd /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
rm -rf *
cmake .. \
-DCMAKE_C_COMPILER=icc \
-DCMAKE_CXX_COMPILER=icpc \
-DGMX_BUILD_OWN_FFTW=ON \
-DGMX_DOUBLE=ON \
-DCMAKE_INSTALL_PREFIX=/share/apps/gromacs/4.6.1 \
-DREGRESSIONTEST_DOWNLOAD=ON
</code></pre>
<li>View the configuration
<pre class="brush:bash">ccmake ..
BUILD_SHARED_LIBS ON
CMAKE_BUILD_TYPE Release
CMAKE_INSTALL_PREFIX /share/apps/gromacs/4.6.1
CMAKE_PREFIX_PATH
GMX_CPU_ACCELERATION SSE4.1
GMX_DEFAULT_SUFFIX ON
GMX_DOUBLE ON
GMX_FFT_LIBRARY fftw3
GMX_GPU OFF
GMX_GSL OFF
GMX_MPI OFF
GMX_OPENMP ON
GMX_QMMM_PROGRAM none
GMX_THREAD_MPI ON
GMX_X11 OFF
REGRESSIONTEST_DOWNLOAD OFF
</code></pre>
<li>make
<pre class="brush:bash">make
</code></pre>
<li>Run the tests
<pre class="brush:bash">make check
Test project /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
Start 1: regressiontests/simple
1/5 Test #1: regressiontests/simple ........... Passed 3.33 sec
Start 2: regressiontests/complex
2/5 Test #2: regressiontests/complex .......... Passed 6.80 sec
Start 3: regressiontests/kernel
3/5 Test #3: regressiontests/kernel ........... Passed 52.89 sec
Start 4: regressiontests/freeenergy
4/5 Test #4: regressiontests/freeenergy ....... Passed 9.19 sec
Start 5: regressiontests/pdb2gmx
5/5 Test #5: regressiontests/pdb2gmx .......... Passed 22.82 sec
100% tests passed, 0 tests failed out of 5
Total Test time (real) = 95.08 sec
[100%] Built target check
</code></pre>
<li>Install
<pre class="brush:bash">make install
</code></pre>
</ol>
<li>Parallel single precision (appends _mpi to the applicable installed files, for example mdrun_mpi)
<ol>
<li>Run cmake to configure the build
<pre class="brush:bash">module purge
module load gromacs-intel/4.6.1
export PATH=/share/apps/cmake/cmake-2.8.10.2-Linux-i386/bin:$PATH
cd /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
rm -rf *
cmake .. \
-DCMAKE_C_COMPILER=icc \
-DCMAKE_CXX_COMPILER=icpc \
-DGMX_BUILD_OWN_FFTW=ON \
-DGMX_MPI=ON \
-DCMAKE_INSTALL_PREFIX=/share/apps/gromacs/4.6.1 \
-DREGRESSIONTEST_DOWNLOAD=ON
</code></pre>
<li>View the configuration
<pre class="brush:bash">ccmake ..
BUILD_SHARED_LIBS ON
CMAKE_BUILD_TYPE Release
CMAKE_INSTALL_PREFIX /share/apps/gromacs/4.6.1
CMAKE_PREFIX_PATH
CUDA_HOST_COMPILER /share/apps/intel/parallel_studio_xe_2013_update3/composer_xe_2013.3.163/bin/intel64/icc
GMX_CPU_ACCELERATION SSE4.1
GMX_DEFAULT_SUFFIX ON
GMX_DOUBLE OFF
GMX_FFT_LIBRARY fftw3
GMX_GPU OFF
GMX_GSL OFF
GMX_MPI ON
GMX_OPENMP ON
GMX_QMMM_PROGRAM none
GMX_THREAD_MPI OFF
GMX_X11 OFF
MPI_EXTRA_LIBRARY /usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/librt.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libdl.so;/usr/lib64/libm.so;/usr/lib64/librt.so;
MPI_LIBRARY /share/apps/gromacs/4.6.1/openmpi/1.6.4-intel/lib/libmpi.so
REGRESSIONTEST_DOWNLOAD OFF
</code></pre>
<li>make
<pre class="brush:bash">make
</code></pre>
<li>Run the tests
<pre class="brush:bash">make check
Test project /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
Start 1: regressiontests/simple
1/5 Test #1: regressiontests/simple ........... Passed 22.68 sec
Start 2: regressiontests/complex
2/5 Test #2: regressiontests/complex .......... Passed 29.49 sec
Start 3: regressiontests/kernel
3/5 Test #3: regressiontests/kernel ........... Passed 222.08 sec
Start 4: regressiontests/freeenergy
4/5 Test #4: regressiontests/freeenergy ....... Passed 19.17 sec
Start 5: regressiontests/pdb2gmx
5/5 Test #5: regressiontests/pdb2gmx .......... Passed 75.39 sec
100% tests passed, 0 tests failed out of 5
Total Test time (real) = 368.83 sec
[100%] Built target check
</code></pre>
<li>Install
<pre class="brush:bash">make install
</code></pre>
</ol>
<li>Parallel double precision (appends _mpi_d to the applicable installed files, for example mdrun_mpi_d)
<ol>
<li>Run cmake to configure the build
<pre class="brush:bash">module purge
module load gromacs-intel/4.6.1
export PATH=/share/apps/cmake/cmake-2.8.10.2-Linux-i386/bin:$PATH
cd /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
rm -rf *
cmake .. \
-DCMAKE_C_COMPILER=icc \
-DCMAKE_CXX_COMPILER=icpc \
-DGMX_BUILD_OWN_FFTW=ON \
-DGMX_MPI=ON \
-DGMX_DOUBLE=ON \
-DCMAKE_INSTALL_PREFIX=/share/apps/gromacs/4.6.1 \
-DREGRESSIONTEST_DOWNLOAD=ON
</code></pre>
<li>View the configuration
<pre class="brush:bash">ccmake ..
BUILD_SHARED_LIBS ON
CMAKE_BUILD_TYPE Release
CMAKE_INSTALL_PREFIX /share/apps/gromacs/4.6.1
CMAKE_PREFIX_PATH
GMX_CPU_ACCELERATION SSE4.1
GMX_DEFAULT_SUFFIX ON
GMX_DOUBLE ON
GMX_FFT_LIBRARY fftw3
GMX_GPU OFF
GMX_GSL OFF
GMX_MPI ON
GMX_OPENMP ON
GMX_QMMM_PROGRAM none
GMX_THREAD_MPI OFF
GMX_X11 OFF
MPI_EXTRA_LIBRARY /usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/librt.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libdl.so;/usr/lib64/libm.so;/usr/lib64/librt.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so
MPI_LIBRARY /share/apps/gromacs/4.6.1/openmpi/1.6.4-intel/lib/libmpi.so
REGRESSIONTEST_DOWNLOAD OFF
</code></pre>
<li>make
<pre class="brush:bash">make
</code></pre>
<li>Run the tests
<pre class="brush:bash">make check
Test project /share/apps/gromacs/4.6.1/build/gromacs-4.6.1/build
Start 1: regressiontests/simple
1/5 Test #1: regressiontests/simple ........... Passed 22.65 sec
Start 2: regressiontests/complex
2/5 Test #2: regressiontests/complex .......... Passed 32.25 sec
Start 3: regressiontests/kernel
3/5 Test #3: regressiontests/kernel ........... Passed 232.23 sec
Start 4: regressiontests/freeenergy
4/5 Test #4: regressiontests/freeenergy ....... Passed 21.13 sec
Start 5: regressiontests/pdb2gmx
5/5 Test #5: regressiontests/pdb2gmx .......... Passed 82.39 sec
100% tests passed, 0 tests failed out of 5
Total Test time (real) = 390.67 sec
[100%] Built target check
</code></pre>
<li>Install
<pre class="brush:bash">make install
</code></pre>
</ol>
</ul>
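The four configurations above differ only in the GMX_MPI and GMX_DOUBLE flags, so the cmake invocations can be generated instead of retyped; a sketch that only prints the commands (same paths as the builds above):

```shell
# Print the cmake command line for each precision/MPI combination; the
# four builds differ only in the GMX_MPI and GMX_DOUBLE flags.
for mpi in OFF ON; do
    for dbl in OFF ON; do
        echo "cmake .. -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc" \
             "-DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=$mpi -DGMX_DOUBLE=$dbl" \
             "-DCMAKE_INSTALL_PREFIX=/share/apps/gromacs/4.6.1" \
             "-DREGRESSIONTEST_DOWNLOAD=ON"
    done
done
```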
</ol>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com3tag:blogger.com,1999:blog-5521735836171938846.post-37758882572327093732013-02-27T12:02:00.002-06:002013-02-27T12:02:25.393-06:00Ubuntu 13.04 on Google Nexus 7Canonical has recently <a href="http://reviews.cnet.com/8301-19736_7-57570119-251/canonical-cracks-the-slate-code-brings-ubuntu-to-nexus-tablets/">released a developer preview of Ubuntu 13.04</a> for Google Nexus devices.<div>
<br /></div>
<div>
I figured, what the heck, may as well try it out on my Google Nexus 7. Following <a href="https://wiki.ubuntu.com/Nexus7/Installation">these instructions</a>, the installation went off without a hitch.</div>
<div>
<br /></div>
<div>
I used a laptop running Ubuntu 12.10 to perform the installation.</div>
<div>
<br /></div>
<div>
Following the installation, I ran into the issue during <a href="https://wiki.ubuntu.com/Nexus7/Installation#First_boot">First Boot</a> where the on screen keyboard didn't work. I don't have an OTG adapter cable, but do have a Bluetooth Logitech Android tablet keyboard. As stated in the instructions, the Bluetooth keyboard doesn't work during first boot.</div>
<div>
<br /></div>
<div>
That is, initially :-)</div>
<div>
<br /></div>
<div>
I got around this by clicking the Gear icon in the upper right corner and clicking System Settings -> Bluetooth. I clicked the add button and put the keyboard into connect mode. The keyboard successfully synced with the tablet.</div>
<div>
<br /></div>
<div>
Great, however now I couldn't find a way to close the System Settings window. I could move it around, but couldn't get the focus away and back to the First Boot screen.</div>
<div>
<br /></div>
<div>
No problem: click the Gear icon again and select Restart. Upon reboot, the Bluetooth keyboard works fine on the First Boot screens.</div>
Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com0tag:blogger.com,1999:blog-5521735836171938846.post-12397984333259706322013-01-23T09:46:00.001-06:002013-01-23T09:46:40.895-06:00How To: Recursively Create Numeric DirectoriesI was asked by a colleague how to do the following:
<br />
<pre class="source-code"><code>
I have a root folder that I need to create a directory based on job numbers.
From the root folder, I'd like to create the following:
1000
1000\1100
1000\1100\1101
1000\1100\1102
all the way up to
9000\9900\9999
To explain, I'll use the 5000 folder:
I would need:
5000
5000\5100
5000\5100\5101 through 5000\5100\5199
5000\5200
5000\5200\5201 through 5000\5200\5299
and so on.
What would be the easiest way?
</code></pre>
<br />
The solution isn't a big deal for anyone familiar with scripting, but I figured I'd share it here in case other non-coders need help with a similar problem.<br />
<br />
I wasn't sure whether they were on a Windows computer (most likely, based on the backslashes) or Linux / Mac, so I provided both Bash and Windows command shell solutions.<br />
<br />
There are any number of ways to approach this; I chose nested for loops.<br />
<h4>
First the Bash solution:
</h4>
<pre class="source-code"><code>for x in {1..9}; do
mkdir ${x}000
cd ${x}000
for y in {1..9}; do
mkdir ${x}${y}00
cd ${x}${y}00
for ((z=1; z<=99; z+=1)); do
mkdir `printf "%s%s%02d" $x $y $z`
done
cd ..
done
cd ..
done
</code></pre>
<h4>
Now the Windows solution:
</h4>
<pre class="source-code"><code>setlocal EnableDelayedExpansion
for /l %x in (1, 1, 9) do (
mkdir %x000
cd %x000
for /l %y in (1, 1, 9) do (
mkdir %x%y00
cd %x%y00
for /l %z in (1, 1, 99) do (
if %z LSS 10 (
mkdir %x%y0%z
) else (
mkdir %x%y%z
)
)
cd ..
)
cd ..
)
</code></pre>
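For what it's worth, since <b>mkdir -p</b> creates missing parent directories, the Bash version can also be collapsed into a single nested loop with no cd juggling; a sketch using /tmp/jobdirs-demo as a stand-in for the real root folder:

```shell
# mkdir -p creates any missing parents, so only the innermost path is
# needed; seq -w zero-pads the last two digits to a fixed width.
root=/tmp/jobdirs-demo   # stand-in for the real root folder
for x in $(seq 1 9); do
    for y in $(seq 1 9); do
        for z in $(seq -w 1 99); do
            mkdir -p "$root/${x}000/${x}${y}00/${x}${y}${z}"
        done
    done
done
```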
Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com1tag:blogger.com,1999:blog-5521735836171938846.post-71266823869859277312012-06-20T11:59:00.001-05:002012-06-20T11:59:35.223-05:00Build Firewall Builder 'FWbuilder' 5.1.0 for Fedora 17Howdy,
<a href="http://www.fwbuilder.org/">FirewallBuilder</a> released version 5.1.0 without providing an officially built rpm for Fedora 17.
This post will build an rpm for Fedora 17 using the <a href="http://sourceforge.net/projects/fwbuilder/files/Current_Packages/5.1.0/fwbuilder-5.1.0.3599-1.fc16.src.rpm/download">src rpm for Fedora 16</a>.
If you need help configuring the rpmbuild environment, start <a href="http://fedoraproject.org/wiki/How_to_create_an_RPM_package#Introduction">here</a>.
First, download the <a href="http://sourceforge.net/projects/fwbuilder/files/Current_Packages/5.1.0/fwbuilder-5.1.0.3599-1.fc16.src.rpm/download">fwbuilder-5.1.0.3599-1.fc16.src.rpm</a> file to your local directory.
Next, extract the src.rpm
<pre class="source-code"><code>$ cd ~/tmp/fwbuilder
$ rpm2cpio fwbuilder-5.1.0.3599-1.fc16.src.rpm | cpio -idvm
$ mv fwbuilder-5.1.0.3599.tar.gz ~/rpmbuild/SOURCES/
$ mv fwbuilder-5.1.0.3599.spec ~/rpmbuild/SPECS/
</code></pre>
A patch file is required due to the following build error:
<pre class="source-code"><code>../fwbuilder/uint128.h: In member function 'std::string uint128::to_string() const':
../fwbuilder/uint128.h:469:95: warning: format '%lX' expects argument of type 'long unsigned int', but argument 3 has type 'long long unsigned int' [-Wformat]
../fwbuilder/uint128.h:469:95: warning: format '%lX' expects argument of type 'long unsigned int', but argument 4 has type 'long long unsigned int' [-Wformat]
../fwbuilder/uint128.h:471:58: warning: format '%lX' expects argument of type 'long unsigned int', but argument 3 has type 'long long unsigned int' [-Wformat]
In file included from ThreadTools.cpp:31:0:
../fwbuilder/ThreadTools.h:168:5: error: 'ssize_t' does not name a type
ThreadTools.cpp:182:82: error: no 'ssize_t libfwbuilder::TimeoutCounter::read(int, void*, size_t) const' member function declared in class 'libfwbuilder::TimeoutCounter'
make[4]: *** [.obj/ThreadTools.o] Error 1
</code></pre>
The patch was built using the following diff (patch credit <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=674349">debian bug 674349</a>)
<pre class="source-code"><code>$ diff -u src/libfwbuilder/src/fwbuilder/ThreadTools.h.orig src/libfwbuilder/src/fwbuilder/ThreadTools.h > ~/rpmbuild/SOURCES/fwbuilder-5.1.0.3599.ssize_t.patch
</code></pre>
<pre class="source-code"><code>--- src/libfwbuilder/src/fwbuilder/ThreadTools.h.orig 2012-06-20 11:31:12.029364056 -0500
+++ src/libfwbuilder/src/fwbuilder/ThreadTools.h 2012-06-20 11:09:36.000000000 -0500
@@ -31,6 +31,7 @@
#include <time.h> //for time_t definition
#include <pthread.h>
+#include <unistd.h>
#include <string>
#include <queue>
</code></pre>
Either run through the diff to generate the patch file or copy and paste the above diff into the new patch file ~/rpmbuild/SOURCES/fwbuilder-5.1.0.3599.ssize_t.patch
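Before wiring the patch into the spec file, it's worth confirming it applies cleanly; <b>patch --dry-run</b> reports success or failure without modifying anything. A self-contained illustration with a throwaway file (against the real tree you would run <b>patch --dry-run -p0</b> with the fwbuilder patch from the extracted source directory):

```shell
# Demonstrate patch --dry-run on a throwaway file; the same check works
# for any unified diff before committing to a real patch step.
work=$(mktemp -d)
cd "$work"
printf 'old line\n' > file.txt.orig
printf 'new line\n' > file.txt
diff -u file.txt.orig file.txt > demo.patch || true  # diff exits 1 when files differ
mv file.txt.orig file.txt                            # back to the unpatched content
patch --dry-run -p0 < demo.patch && echo "patch applies cleanly"
```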
Now edit the fwbuilder-5.1.0.3599.spec file to include the new patch file
<pre class="source-code"><code>--- fwbuilder-5.1.0.3599.spec.orig 2012-03-22 04:17:55.000000000 -0500
+++ fwbuilder-5.1.0.3599.spec 2012-06-20 11:51:09.901734348 -0500
@@ -20,6 +20,7 @@
Group: %{guigroup}
Url: http://www.fwbuilder.org/
Source: http://prdownloads.sourceforge.net/fwbuilder/%{name}-%{version}.tar.gz
+Patch1: fwbuilder-5.1.0.3599.ssize_t.patch
Packager: Vadim Kurland <vadim@fwbuilder.org>
Buildroot: %{_tmppath}/%{name}-%{version}-root
@@ -48,6 +49,7 @@
%prep
%setup
+%patch1 -p0
./autogen.sh
%build
</code></pre>
Then run the build
<pre class="source-code"><code>$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fwbuilder-5.1.0.3599.spec
</code></pre>
The resulting rpm contains:
<pre class="source-code"><code>$ rpm -qpl ../RPMS/x86_64/fwbuilder-5.1.0.3599-1.fc17.uabeng.x86_64.rpm |more
/usr/bin/fwb_iosacl
/usr/bin/fwb_ipf
/usr/bin/fwb_ipfw
/usr/bin/fwb_ipt
/usr/bin/fwb_pf
/usr/bin/fwb_pix
/usr/bin/fwb_procurve_acl
/usr/bin/fwbedit
/usr/bin/fwbuilder
/usr/share/applications/fwbuilder.desktop
/usr/share/doc/fwbuilder-5.1.0.3599
...
</code></pre>
Install the rpm using yum
<pre class="source-code"><code>$ sudo yum install ../RPMS/x86_64/fwbuilder-5.1.0.3599-1.fc17.uabeng.x86_64.rpm
Loaded plugins: fs-snapshot, langpacks, presto, refresh-packagekit
Examining ../RPMS/x86_64/fwbuilder-5.1.0.3599-1.fc17.uabeng.x86_64.rpm: fwbuilder-5.1.0.3599-1.fc17.uabeng.x86_64
Marking ../RPMS/x86_64/fwbuilder-5.1.0.3599-1.fc17.uabeng.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package fwbuilder.x86_64 0:5.1.0.3599-1.fc17.uabeng will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================
Installing:
fwbuilder x86_64 5.1.0.3599-1.fc17.uabeng /fwbuilder-5.1.0.3599-1.fc17.uabeng.x86_64 36 M
Transaction Summary
=====================================================================================================================
Install 1 Package
</code></pre>
Hope this helps other FC17 FWbuilder users out there!Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com1tag:blogger.com,1999:blog-5521735836171938846.post-67855884179537892392012-06-01T12:02:00.002-05:002012-06-25T12:42:31.902-05:00HowTo: Converting EXT4 to BTRFS in Fedora 17This HowTo documents the steps I used on a <a href="https://fedoraproject.org/">Fedora 17</a> x86_64 test machine to convert the root file system from Ext4 to Btrfs.<br />
<br />
Anaconda (Fedora installer) doesn't support the creation or manipulation of Btrfs file systems in the graphical installer. Support for this is <a href="http://www.h-online.com/open/news/item/Fedora-puts-back-Btrfs-deployment-yet-again-1436704.html">slated for Fedora 18</a>, at which time, Btrfs may become the default file system!<br />
<br />
It is possible, however, to create Btrfs file systems at install time via <a href="http://fedoraproject.org/wiki/Anaconda/Kickstart#btrfs">Kickstart</a>.<br />
<br />
The virtual machine was installed with <a href="https://fedoraproject.org/">Fedora 17</a> x86_64 using default partitioning (LVM and Ext4) and selecting the Desktop profile.<br />
<br />If possible, create the partitions without LVM (see the section at the bottom for a non-LVM example and the resulting UUID changes).<br />
<br />
In the examples, the root file system is a logical volume: /dev/mapper/vg_f17vm-lv_root<br />
<br />
Following the install and the initial boot, I rebooted using the <a href="http://download.fedoraproject.org/pub/fedora/linux/releases/17/Live/x86_64/Fedora-17-x86_64-Live-Desktop.iso">Fedora 17 x86_64 Live CD</a> and chose the option to <b>Try Fedora</b>.<br />
<br />
I initially tried to perform the conversion using the Fedora 17 DVD and rescue mode, however the btrfs-convert utility isn't included. The live CD, however, includes the Btrfs utilities.<br />
<br />
<h3>Convert EXT4 root</h3>
All of the steps are performed using the terminal, so open a Gnome Terminal and SU to root:<br />
<pre class="source-code"><code>$ su -
</code></pre>
<br />
Run fsck on the file system<br />
<pre class="source-code"><code># fsck.ext4 -f /dev/mapper/vg_f17vm-lv_root
</code></pre>
<br />
Convert from ext4 to btrfs. The metadata stage can take a while on a large populated file system (on a 1.5TB ext4 file system with approx 500GB of data, a good mix of large and small files, this step took about 1.5 hours)<br />
<pre class="source-code"><code># btrfs-convert /dev/mapper/vg_f17vm-lv_root
creating btrfs metadata.
creating ext2fs image file.
cleaning up system chunk.
conversion complete.
</code></pre>
<br />
Mount the freshly converted btrfs root file system so that we can make some modifications (fstab, SELinux)<br />
<pre class="source-code"><code># mkdir /mnt/btrfs
# mount -t btrfs /dev/mapper/vg_f17vm-lv_root /mnt/btrfs
</code></pre>
<br />
Edit fstab to change the root file system from ext4 to btrfs
<br />
<pre class="source-code"><code># vi /mnt/btrfs/etc/fstab
#/dev/mapper/vg_f17vm-lv_root /    ext4  defaults 1 1
/dev/mapper/vg_f17vm-lv_root  /    btrfs defaults 1 1
</code></pre>
<br />
Force the system to automatically relabel the SELinux context for the root file system during boot.<br />
<pre class="source-code"><code># touch /mnt/btrfs/.autorelabel
</code></pre>
<br />
<b>NOTE:</b> In my test, the Fedora 17 system would hang with systemd library permissions errors prior to getting to the SELinux auto relabel step.<br />
<br />
To work around this, I rebooted back into the Live CD and changed the SELinux policy from "enforcing" to "permissive". During the next boot, the permission errors went by and then the SELinux autorelabel ran to completion.
<br />
<pre class="source-code"><code># vi /mnt/btrfs/etc/selinux/config
#SELINUX=enforcing
SELINUX=permissive
</code></pre>
The Btrfs conversion process created a subvolume of the old ext4 file system. The following commands mount the subvolume and file system, demonstrating the power of Btrfs.<br />
<pre class="source-code"><code># mkdir /mnt/{ext2_saved,ext4}
# mount -t btrfs -o subvol=ext2_saved /dev/mapper/vg_f17vm-lv_root /mnt/ext2_saved
# mount -t ext4 -o loop,ro /mnt/ext2_saved/image /mnt/ext4
</code></pre>
<br />
The contents of the ext4 and btrfs mounts should be identical, sans the changes made above to fstab and config.<br />
<br />
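One quick way to back that claim up is <b>diff -r</b>, which walks both trees and reports any differing files; illustrated here with throwaway directories (on the converted system the arguments would be /mnt/ext4 and /mnt/btrfs):

```shell
# Recursively compare two trees and rely on diff's exit status; the
# throwaway dirs stand in for /mnt/ext4 and /mnt/btrfs.
a=$(mktemp -d); b=$(mktemp -d)
echo same > "$a/f.txt"
echo same > "$b/f.txt"
if diff -r "$a" "$b" > /dev/null; then
    echo "trees identical"
else
    echo "trees differ"
fi
```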
It's possible to roll back the conversion by first unmounting the file systems and then rolling it back using the -r switch<br />
<pre class="source-code"><code># umount /dev/mapper/vg_f17vm-lv_root
# btrfs-convert -r /dev/mapper/vg_f17vm-lv_root
</code></pre>
<br />
If you decide to stay with btrfs, you can delete the subvolume snapshot of the original ext4 system
<br />
<pre class="source-code"><code># btrfs subvolume delete /mnt/btrfs/ext2_saved
</code></pre>
<br />
Reboot into the installed OS and the system should perform the relabel and then come up with your shiny new btrfs file system.<br />
<br />
To make things really cool, install the <a href="http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Plugin_Descriptions.html">yum-plugin-fs-snapshot</a> Yum plugin. This plugin will create a btrfs subvolume snapshot of your root file system prior to installing updates. The end result: you can roll back yum updates. Before deleting the ext2_saved subvolume, test that you can mount the snapshot of the ext4 file system.<br />
<br />
Once the system is up and running, don't forget to set the SELinux policy back to "enforcing"<br />
<br />
<b>Issues:</b><br />
<ol>
<li>It appears that some of the systemd services attempt to start prior to the SELinux .autorelabel step. This results in permission errors that prevent the system from booting. Temporarily change the SELinux policy to "permissive" to work past this.</li>
<li>During boot, you may see errors logged by the systemd-fsck service failing for lv_root due to a missing fsck.btrfs. I suspect that the btrfs-progs rpm failed to create a symlink of fsck.btrfs pointing to /usr/sbin/btrfsck</li>
</ol>
<pre class="source-code"><code>systemd-fsck[858]: fsck: fsck.btrfs: not found
systemd-fsck[858]: fsck: error 2 while executing fsck.btrfs for /dev/mapper/vg_f17vm-lv_root</code></pre>
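If the missing fsck.btrfs really is the cause, a symlink to btrfsck is the obvious workaround. A sketch using a scratch ROOT prefix so it can be tried safely; on a real system ROOT would be empty and the commands would need root privileges:

```shell
# Create the fsck.btrfs -> btrfsck symlink if it is missing. ROOT is a
# scratch prefix for safe illustration; on a real system it would be ""
# and the commands would be run with sudo.
ROOT=/tmp/fsck-demo
mkdir -p "$ROOT/usr/sbin" "$ROOT/sbin"
touch "$ROOT/usr/sbin/btrfsck"          # stand-in for the real binary
if [ ! -e "$ROOT/sbin/fsck.btrfs" ]; then
    ln -s "$ROOT/usr/sbin/btrfsck" "$ROOT/sbin/fsck.btrfs"
fi
```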
<br />
<h3>Convert Luks Encrypted EXT4 home</h3>
If your file system is encrypted with Luks, you'll have the additional step of decrypting the file system prior to running the conversion
<pre class="source-code"><code># cryptsetup luksOpen /dev/mapper/vg_f17vm-lv_home luks_home
# fsck.ext4 -f /dev/mapper/luks_home
# btrfs-convert /dev/mapper/luks_home
creating btrfs metadata.
creating ext2fs image file.
cleaning up system chunk.
conversion complete.
</code></pre>
<br />
<h3>Converting Non LVM Ext4 and UUID in /etc/fstab</h3>
If you don't use LVM (recommended, based on everything I've read, since LVM isn't needed for btrfs), your /etc/fstab may mount the partitions using the UUID. After converting the file system to btrfs, the UUID will change.<br />
<br />
These steps demonstrate how to convert the root file system, a non-LVM ext4 partition, /dev/sda2.
<pre class="source-code"><code># fsck.ext4 -f /dev/sda2
# btrfs-convert /dev/sda2
creating btrfs metadata.
creating ext2fs image file.
cleaning up system chunk.
conversion complete.
</code></pre>
Identify the UUID using blkid (you can also look under /dev/disk/by-uuid)
<pre class="source-code"><code># blkid /dev/sda2
/dev/sda2: UUID="56d1e02b-3120-44c3-b8f8-d3289012e447" UUID_SUB="13f7b26b-42b1-41bb-a9dc-c928ad033f6d" TYPE="btrfs"
</code></pre>
<br />
<pre class="source-code"><code># mkdir /mnt/btrfs
# mount -t btrfs /dev/sda2 /mnt/btrfs
</code></pre>
<br />
Edit fstab to change the root file system from ext4 to btrfs and replace the UUID with the new UUID
<br />
<pre class="source-code"><code># vi /mnt/btrfs/etc/fstab
#UUID=e14bd8d6-ac8a-4f15-8169-f0a9bdf4b69e / ext4 defaults 1 1
UUID=56d1e02b-3120-44c3-b8f8-d3289012e447 / btrfs defaults 1 1
UUID=f4cdf95d-66eb-4428-960d-23b289ffb63e /boot ext4 defaults 1 2
</code></pre>
<br />
Chroot to the real root file system and update grub.cfg using grub2-mkconfig
<br />
<pre class="source-code"><code>
# mount -o bind /dev /mnt/btrfs/dev
# mount -o bind /proc /mnt/btrfs/proc
# mount -o bind /sys /mnt/btrfs/sys
# mount /dev/sda1 /mnt/btrfs/boot
# chroot /mnt/btrfs
# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.4.3-1.fc17.x86_64
Found initrd image: /boot/initramfs-3.4.3-1.fc17.x86_64.img
Found linux image: /boot/vmlinuz-3.3.4-5.fc17.x86_64
Found initrd image: /boot/initramfs-3.3.4-5.fc17.x86_64.img
Warning: Please don't use old title `Fedora Linux, with Linux 3.3.4-5.fc17.x86_64' for GRUB_DEFAULT, use `Advanced options for Fedora Linux>Fedora Linux, with Linux 3.3.4-5.fc17.x86_64' (for versions before 2.00) or `gnulinux-advanced-/dev/sda2>gnulinux-3.3.4-5.fc17.x86_64-advanced-/dev/sda2' (for 2.00 or later)
done
</code></pre>
Exit out of the chroot and unmount the file systems
<br />
<pre class="source-code"><code># exit
# umount /mnt/btrfs/boot
# umount /mnt/btrfs/dev
# umount /mnt/btrfs/proc
# umount /mnt/btrfs/sys
# umount /mnt/btrfs
</code></pre>
<br />
Reboot and you should be good to go. The SELinux relabeling can take a while!
<h3>References:</h3>
<ul>
<li><a href="https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3">https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3</a></li>
<li><a href="https://btrfs.wiki.kernel.org/index.php/Getting_started">https://btrfs.wiki.kernel.org/index.php/Getting_started</a></li>
<li><a href="http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Plugin_Descriptions.html">http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Plugin_Descriptions.html</a></li>
<li><a href="http://fedoraproject.org/wiki/Anaconda/Kickstart#btrfs">http://fedoraproject.org/wiki/Anaconda/Kickstart#btrfs</a></li>
<li><a href="http://www.h-online.com/open/news/item/Fedora-puts-back-Btrfs-deployment-yet-again-1436704.html">http://www.h-online.com/open/news/item/Fedora-puts-back-Btrfs-deployment-yet-again-1436704.html</a></li>
<li><a href="http://forums.fedoraforum.org/showthread.php?t=246520">Good HowTo initially created for Fedora 13</a></li>
</ul>
Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com4tag:blogger.com,1999:blog-5521735836171938846.post-24170595836324649012012-05-04T17:42:00.001-05:002012-05-04T17:42:19.351-05:00Script to Query Dell Warranty StatusI'm sure there are more elegant ways of obtaining the warranty status for your Dell hardware, but nonetheless, here's a Ruby script I wrote that does just that :-)<br />
<br />
Script: <a href="https://github.com/flakrat/misc-scripts/blob/master/query_dell_st/lib/query_dell_st.rb">query_dell_st.rb</a><br />
<br />
The script currently has two modes, single lookup and batch lookup.<br />
<br />
Single lookup takes two arguments:<br />
<ul>
<li>--host <i>HOSTNAME</i></li>
<li>--svctag <i>SERVICE_TAG</i></li>
</ul>
Batch lookup requires only a single argument pointing to an input file that contains one "HOSTNAME SERVICE_TAG" entry per line for each device.<br />
<ul>
<li>--file <i>FILE</i></li>
</ul>
<b>Example single lookup:</b>
<br />
<pre class="source-code"><code>
$ ./query_dell_st.rb --svctag GT7D8P1 --host lin-srv01
Host: lin-srv01
Model: PowerEdge R410
Service Tag: GT7D8P1
Warranty Exp: 617 days left
</code></pre>
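For batch mode, the input file is plain text with one "HOSTNAME SERVICE_TAG" pair per line. A hypothetical example (the second hostname and service tag below are made up for illustration):

```shell
# Build a sample input file for the --file batch mode.
# Format: one "HOSTNAME SERVICE_TAG" pair per line.
HOSTFILE=$(mktemp)
cat > "$HOSTFILE" <<'EOF'
lin-srv01 GT7D8P1
lin-srv02 7H2XK9Q
EOF

# ./query_dell_st.rb --file "$HOSTFILE"
wc -l < "$HOSTFILE"
```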
<br />
Over the next few days I'll add more features (CSV and tab delimited output, for example) as well as address bugs and error handling.<br />
<br />
Hopefully the script is useful to others.Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com2tag:blogger.com,1999:blog-5521735836171938846.post-86208768813334591772012-04-13T13:53:00.000-05:002012-04-13T13:53:28.003-05:00HowTo: Disable User List on RHEL6 / CentOS6 Login WindowQuick bit of code to disable the user list on the RHEL6 / CentOS6 login screen.
<br />
<pre class="source-code"><code>
# gconftool-2 --direct --config-source=xml:readwrite:/etc/gconf/gconf.xml.defaults --type bool --set /apps/gdm/simple-greeter/disable_user_list true
</code></pre>
<br />
Useful Gconf References:
<ul>
<li><a href="http://live.gnome.org/GDM/2.22/Configuration">http://live.gnome.org/GDM/2.22/Configuration</a></li>
<li><a href="http://svn.gnome.org/viewvc/gdm/trunk/gui/simple-greeter/gdm-simple-greeter.schemas.in?view=markup">http://svn.gnome.org/viewvc/gdm/trunk/gui/simple-greeter/gdm-simple-greeter.schemas.in?view=markup</a></li>
</ul>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com2tag:blogger.com,1999:blog-5521735836171938846.post-1143088004749590892012-03-29T13:46:00.000-05:002012-03-29T13:46:58.911-05:00Ghetto-timemachine Ruby Backup ScriptI decided to try my hand at some Ruby scripting and figured, what better way than to port one of my existing Perl scripts.<br />
<br />
The result is the Ruby <a href="https://github.com/flakrat/ghetto-timemachine/blob/master/ghetto-timemachine.rb">ghetto-timemachine.rb</a> backup script!<br />
<br />
The code is maintained here:<br />
<a href="https://github.com/flakrat/ghetto-timemachine">https://github.com/flakrat/ghetto-timemachine</a><br />
<br />
So, what does it do? In a nutshell, it performs daily backups using a combination of hard links and rsync, resulting in a Time Machine-like backup that uses very little additional space beyond the initial backup plus the size of any files that change.<br />
<br />
Example usage:<br />
<ul>
<li>Local source to local destination with comma separated list of excludes
<pre class="source-code"><code>
./ghetto-timemachine.rb --src ~/Pictures --dest /media/USB/backup --excludes \*.raw,\*.bmp
</code></pre>
</li>
<li>Local source to remote destination
<pre class="source-code"><code>
./ghetto-timemachine.rb --src ~/Pictures --dest flakrat@rathole:/backup --excludes \*.raw,\*.bmp
</code></pre>
</li>
</ul>
Nightly cron job running at 1:10AM
<br />
<pre class="source-code"><code>
10 1 * * * /home/flakrat/bin/ghetto-timemachine.rb --src ~/Pictures --dest flakrat@rathole:/backup --excludes \*.raw,\*.bmp
</code></pre>
<br />
See the <a href="https://github.com/flakrat/ghetto-timemachine/blob/master/README">README</a> for a more detailed description of the backup and rotation process.Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com0tag:blogger.com,1999:blog-5521735836171938846.post-89383189545636114262012-03-15T09:59:00.000-05:002012-03-15T09:59:47.483-05:00Fedora 16 Gnome Terminal Redraw Doesn't Always Completely RenderI've noticed an issue on my Fedora 16 workstation where Gnome Terminal redraws don't always completely render.<br />
<br />
For example, it could be as simple as pressing the up arrow key to scroll through the CLI history (or ESC K for the 'set -o vi' crowd). A good percentage of the time when I'd do that, the command line wouldn't completely render; Ctrl-C and scrolling again usually cleared it up.<br />
<br />
I also experienced the behavior a lot in vi: scroll or jump to a line and the text would end up being a combination of the previous page and the new location.<br />
<br />
I experienced the issue using both the Nouveau and the binary NVIDIA drivers.<br />
<br />
The following <a href="https://bugzilla.redhat.com/show_bug.cgi?id=720605#c29">Bugzilla ticket</a> provided a possible workaround. I've only just implemented the environment variables, but after one day of use I have yet to experience the issue, whereas before it was pretty consistent.<br />
<br />
To globally apply the settings, add the following variables to the end of /etc/gdm/Xsession:<br />
<pre class="source-code"><code>
# Attempting to fix the Gnome terminal redraw issue:
# https://bugzilla.redhat.com/show_bug.cgi?id=720605
export CLUTTER_DEBUG=disable-culling
export CLUTTER_PAINT=disable-clipped-redraws:disable-culling
</code></pre><br />
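If you'd rather confirm the workaround before touching /etc/gdm/Xsession, the variables can be exported in a single shell first (a sketch; whether a terminal launched from that shell actually inherits them depends on how it is spawned):

```shell
# Export the workaround variables for the current session only
export CLUTTER_DEBUG=disable-culling
export CLUTTER_PAINT=disable-clipped-redraws:disable-culling

# gnome-terminal &   # a terminal started from this shell inherits them

# Verify the variables are set
env | grep '^CLUTTER_'
```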
Reboot, log back in and hopefully the issue goes away :-)Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com1tag:blogger.com,1999:blog-5521735836171938846.post-84780357095758704712012-01-31T16:34:00.000-06:002012-01-31T16:34:20.029-06:00CentOS 6.2 Installation issue with Dell Precision Workstation T7500Attempting to kickstart a new Dell Precision Workstation T7500 with an nVidia Quadro 2000 using the CentOS 6.2 DVD media resulted in a kernel panic. All signs pointed to the community-built nouveau module ("nouveau_probe_i2c_addr panic").<br />
<br />
I'm not so worried about the nouveau module working, since the kickstart process installs the official nVidia binary release.<br />
<br />
To work around the issue and allow the system to boot fully into the installer, simply add "rdblacklist=nouveau" to the argument list:<br />
<br />
<pre class="source-code"><code>
> vmlinuz initrd=initrd.img ks=http://srv1/ks/wks-el6.cfg rdblacklist=nouveau reboot=pci
</code></pre>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com0tag:blogger.com,1999:blog-5521735836171938846.post-38432268778563495172012-01-18T17:02:00.003-06:002012-01-30T16:37:26.847-06:00How to Configure Nagios Check_MK to Report Number of Package Updates Needed on ClientThe Check_mk Updates plugin was posted to the <a href="http://comments.gmane.org/gmane.network.nagios.checkmk/2911">Check_mk mailing list by Jonathan Mills</a>. The following blog entry covers the steps I took to integrate it in my environment. I opted to distribute the two plugin files to the clients via puppet rather than RPM.<br />
<ul><li>OMD version: 0.51.20111117<br />
<li>Puppet version: 2.6.12<br />
<li>Server is CentOS 5.7 x86_64<br />
<li>Clients are CentOS 5.7, CentOS 6.2, RHEL5.7, Fedora 16 ix86,x86_64<br />
</ul>References <ul><li>Jonathan Mills <a href="http://comments.gmane.org/gmane.network.nagios.checkmk/2911">check_mk mailing list post</a> containing the source files<br />
<li>Jonathan Mills <a href="http://lists.mathias-kettner.de/pipermail/checkmk-en/2011-September/003682.html">check_mk mailing list post</a> feedback providing additional check_mk and nagios configurations<br />
</ul>This check currently only works with Yum-based clients (tested on CentOS 5 and 6, RHEL5 and Fedora 16) and requires the yum-security package (EL5) or the yum-plugin-security package (EL6). The plugin attempts to identify security and non-security packages that are pending install. For RHEL, it's simple to get this info via "yum --security check-update": <pre class="source-code"><code>
$ sudo yum --security check-update
Loaded plugins: dellsysid, rhnplugin, security
Limiting package lists to security relevant ones
Needed 6 of 17 packages, for security
kernel.x86_64 2.6.18-274.17.1.el5 rhel-x86_64-client-5
kernel-devel.x86_64 2.6.18-274.17.1.el5 rhel-x86_64-client-5
kernel-headers.x86_64 2.6.18-274.17.1.el5 rhel-x86_64-client-5
libxml2.i386 2.6.26-2.1.12.el5_7.2 rhel-x86_64-client-5
libxml2.x86_64 2.6.26-2.1.12.el5_7.2 rhel-x86_64-client-5
libxml2-python.x86_64 2.6.26-2.1.12.el5_7.2 rhel-x86_64-client-5
</code></pre>For CentOS and most likely Scientific Linux, the security errata are not provided with the repos, so the above command will always report 0 security updates. This is solved in the plugin by parsing the results of the -v (verbose) output. <ol><li>Add the client side scripts to the puppet server (Puppet isn't necessary, you can install the RPM provided in the tar file on the check_mk post)<br />
<ul><li>Create the file directories under site<br />
<pre class="source-code"><code>
$ mkdir -p var/lib/puppet/files/site/etc/check_mk
$ mkdir -p var/lib/puppet/files/site/usr/lib/check_mk_agent/plugins
</code></pre><li>Create the check_updates.cfg etc file<br />
<pre class="source-code"><code>
$ vim var/lib/puppet/files/site/etc/check_mk/check_updates.cfg
</code></pre><pre class="source-code"><code>
# +------------------------------------------------------------------+
# | ____ _ _ __ __ _ __ |
# | / ___| |__ ___ ___| | __ | \/ | |/ / |
# | | | | '_ \ / _ \/ __| |/ / | |\/| | ' / |
# | | |___| | | | __/ (__| < | | | | . \ |
# | \____|_| |_|\___|\___|_|\_\___|_| |_|_|\_\ |
# | |
# | Copyright Mathias Kettner 2010 mk@mathias-kettner.de |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation in version 2. check_mk is distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY; with-
# out even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more de-
# tails. You should have received a copy of the GNU General Public
# License along with GNU Make; see the file COPYING. If not, write
# to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# Boston, MA 02110-1301 USA.
# check_updates.cfg
# This file configures mk_check_updates.
# interval (seconds) between runs of 'yum check-update'
INTERVAL=7200
# path to log file
LOG="/var/log/check_updates.log"
</code></pre><li>Create the mk_check_updates script (The script has updates that I made to resolve some issues related to the priorities yum plugin and yum output beginning with Keeping or Removing, so it's slightly different than the original source)<br />
<pre class="source-code"><code>
$ vim var/lib/puppet/files/site/usr/lib/check_mk_agent/plugins/mk_check_updates
</code></pre><pre class="source-code"><code>
#!/bin/bash
#
# OUTPUT:
# (security) (non-security) (runtime) (check age)
# <<<updates>>>
# 7 40 7 209
# Unix time (seconds since Unix epoch)
START=$(date +%s)
TIME=
AGE=
INTERVAL=86400 # default interval once a day
LOG="/var/log/check_updates.log" # default path to log file
# Source config file if it exists
if [ -e "/etc/check_mk/check_updates.cfg" ]; then
. /etc/check_mk/check_updates.cfg
fi
# function run_check_update
run_check_update () {
if which yum >/dev/null; then
if [ ! -e "/var/run/yum.pid" ]; then
cat /dev/null > $LOG
# Check for security RPMS
yum -v --security check-update | egrep '(i.86|x86_64|noarch)' | egrep -v '\(priority\)' |\
egrep -v '(^Keeping|^Removing|^Nothing|^Excluding|^Looking)' | sed 's/^.*--> //g' | while read L
do
RPM=$(echo $L | awk '{print $1}')
Q=$(echo ${L} | grep 'non-security' > /dev/null; echo $?)
if [ $Q -eq 0 ]; then
echo "non-security $RPM" >> $LOG
else
echo "security $RPM" >> $LOG
fi
done
fi
fi
}
# function timeyet
timeyet () {
LAST=$(stat -c '%Y' $LOG)
NOW=$(date +%s)
AGE=$((NOW - LAST))
[ $AGE -gt $INTERVAL ] && TIME=1 || TIME=0
}
# See if it's time to run 'yum check-updates' yet
if [ ! -e $LOG ]; then
touch $LOG
run_check_update
timeyet
else
timeyet
if [ $TIME = 1 ]; then
run_check_update
timeyet
fi
fi
# Gather results from log file
SEC=$(grep '^security' $LOG | wc -l)
NON=$(grep '^non-security' $LOG | wc -l)
# Unix time (seconds since epoch)
END=$(date +%s)
RUNTIME=$((END - START))
echo '<<<updates>>>'
echo $SEC" "$NON" "$RUNTIME" "$AGE
exit 0
</code></pre><li>Add the scripts to git<br />
<pre class="source-code"><code>
$ git add var/lib/puppet/files/site/usr/lib/check_mk_agent/plugins/mk_check_updates
$ git add var/lib/puppet/files/site/etc/check_mk/check_updates.cfg
$ git commit -a -m "Adding check_mk client side scripts to report yum updates"
$ git push
</code></pre><li>Add the scripts to the check_mk class to ensure that the clients get the code<br />
<pre class="source-code"><code>
$ vim etc/puppet/manifests/classes/check_mk.pp
</code></pre><pre class="source-code"><code>
# etc/puppet/manifests/classes/check_mk.pp
class check_mk {
case $operatingsystem {
"centos",
"fedora",
"redhat": {
package {["check_mk-agent", "check_mk-agent-logwatch"]:
ensure => latest,
notify => Service["xinetd"],
}
service { "xinetd":
ensure => running,
enable => true,
}
file { "/etc/check_mk/check_updates.cfg":
owner => "root",
group => "root",
mode => 755,
source => "puppet:///site/etc/check_mk/check_updates.cfg",
}
file { "/usr/lib/check_mk_agent/plugins/mk_check_updates":
owner => "root",
group => "root",
mode => 755,
source => "puppet:///site/usr/lib/check_mk_agent/plugins/mk_check_updates",
}
}
default: { }
}
}
</code></pre><li>Ensure that the check_mk class is included in the node definitions (currently included in the baseclass template)<br />
<li>Git commit the changes to check_mk.pp class and push to the git server<br />
</ul><li>Install the python script on the nagios server (note: user-defined checks go in local/share/check_mk/checks; if you put them into $SITE/share.... they won't survive the next OMD upgrade)<br />
<pre class="source-code"><code>
$ su - sitename
$ vim local/share/check_mk/checks/updates
</code></pre><pre class="source-code"><code>
#!/usr/bin/python
# -*- encoding: utf-8; py-indent-offset: 4 -*-
# Jonathan Mills 10/2011
# Example output from agent:
# [security] [non-security] [runtime (seconds)] [age of results (seconds)]
# <<<updates>>>
# 7 40 0 13
#
updates_default_values = (5, 20)
# inventory
def inventory_updates(checktype, info):
#if len(info) >= 1 and len(info[0]) >= 1:
# return [ (None, None) ]
inventory = []
inventory.append( (None, "updates_default_values") )
return inventory
# check
def check_updates(_no_item, params, info):
# unpack check parameters
min_num_sec, min_num_nonsec = params
for line in info:
perfdata = []
sec = int(line[0])
nonsec = int(line[1])
age = int(line[3])
infotext = "%s Security Updates, %s Non-Critical Updates (Last Checked %s seconds ago)" % (sec, nonsec, age)
perfdata.append( ( "Runtime (sec)", int(line[2]) ) )
if sec > min_num_sec:
return (2, "CRITICAL - " + infotext, perfdata)
elif nonsec > min_num_nonsec:
return (1, "WARNING - " + infotext, perfdata)
else:
return (0, "OK - " + infotext, perfdata)
# declare the check to Check_MK
check_info['updates'] = (check_updates, "Updates", 1, inventory_updates)
</code></pre><li>Add a new time period 'nightly' to nagios that can be used to limit this check to running daily from 3AM to 4AM<br />
<pre class="source-code"><code>
$ vim etc/nagios/conf.d/timeperiods.cfg
</code></pre><pre class="source-code"><code>
###############################################################################
# TIMEPERIODS.CFG - SAMPLE TIMEPERIOD DEFINITIONS
#
# NOTES: This config file provides you with some example timeperiod definitions
# that you can reference in host, service, contact, and dependency
# definitions.
#
# You don't need to keep timeperiods in a separate file from your other
# object definitions. This has been done just to make things easier to
# understand.
#
###############################################################################
# This defines a timeperiod where all times are valid for checks,
# notifications, etc. The classic "24x7" support nightmare. :-)
define timeperiod{
timeperiod_name 24x7
alias 24 Hours A Day, 7 Days A Week
sunday 00:00-24:00
monday 00:00-24:00
tuesday 00:00-24:00
wednesday 00:00-24:00
thursday 00:00-24:00
friday 00:00-24:00
saturday 00:00-24:00
}
# 'workhours' timeperiod definition
define timeperiod{
timeperiod_name workhours
alias Normal Work Hours
monday 08:00-17:00
tuesday 08:00-17:00
wednesday 08:00-17:00
thursday 08:00-17:00
friday 08:00-17:00
}
# 'none' timeperiod definition
define timeperiod{
timeperiod_name none
alias No Time Is A Good Time
}
# 'nightly' timeperiod definition
define timeperiod{
timeperiod_name nightly
alias Nightly Check
sunday 03:00-04:00 ; Every Sunday of every week
monday 03:00-04:00 ; Every Monday of every week
tuesday 03:00-04:00 ; Every Tuesday of every week
wednesday 03:00-04:00 ; Every Wednesday of every week
thursday 03:00-04:00 ; Every Thursday of every week
friday 03:00-04:00 ; Every Friday of every week
saturday 03:00-04:00 ; Every Saturday of every week
}
</code></pre><li>Add the new check to check_mk main.mk file<br />
<pre class="source-code"><code>
$ vim etc/check_mk/main.mk
</code></pre><pre class="source-code"><code>
# check-updates (OMD 0.52 requires user-defined vars to be prefixed with an underscore)
_updates_default_values = (6, 20) # check-updates: critical when 6 or more sec updates, warning when 20 or more non-sec updates
extra_service_conf["check_period"] = [
( "nightly", ALL_HOSTS, [ "Updates" ] ), # check-updates: Only check for updates from 3 to 4AM as set in timeperiods.cfg
]
extra_host_conf["max_check_attempts"] = [
( "1", ALL_HOSTS, [ "Updates" ] ), # check-updates: Only check for updates once
]
# Enable notifications for specific services
extra_service_conf["notifications_enabled"] = [
( "1", ALL_HOSTS, ["Check_MK"]),
( "0", ALL_HOSTS, ["Updates"]), # check-updates: Don't notify for security OS updates
( "1", ALL_HOSTS, ["Memory used"]),
( "1", ALL_HOSTS, ["IPMI Sensor Summary","fs_*"]),
( "1", ["linsrv"], ["IPMI Sensor Summary","ambient_temp"]),
( "1", ALL_HOSTS, ["Multipath *"]),
( "1", ["kvm"], ALL_HOSTS, ["CPU load"]),
( "1", ["kvm"], ALL_HOSTS, ["CPU utilization"]),
( "1", ["mailsrv"], ["Postfix Queue"]),
( "1", ["linsrv"], ALL_HOSTS, ["Dell OMSA"]),
( "0", ALL_HOSTS, ALL_SERVICES), # and disable notifications for everything else
]
service_groups = [
( "updates", ALL_HOSTS, [ "Updates" ] ), # check-updates: Create updates service group to make viewing in web interface easier
]
define_servicegroups = {
"updates" : "RHEL/CentOS Yum Updates", # check-updates: Can now statically link to a service group web view: http://nagios.server/sitename/check_mk/view.py?view_name=servicegroup&servicegroup=updates
}
</code></pre><li>Rerun the inventory for the nodes<br />
<pre class="source-code"><code>
$ check_mk -II node-01
...
or for all nodes
$ check_mk -II
</code></pre><li>Reload the services<br />
<pre class="source-code"><code>
$ check_mk -O
</code></pre><li>Check the web page for the nodes, alternatively you can go straight to the Updates overview page:<br />
<a href="https://nagios.server/sitename/check_mk/view.py?view_name=servicegroup&servicegroup=updates">https://nagios.server/sitename/check_mk/view.py?view_name=servicegroup&servicegroup=updates</a><br />
</ol>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com7tag:blogger.com,1999:blog-5521735836171938846.post-86832301712856088742011-12-16T09:16:00.000-06:002011-12-16T09:16:05.476-06:00VM Guests Take Forever to Shutdown in VMware Workstation 8I recently had a situation where a user purchased a new i7 + SSD based laptop. Needless to say, the laptop is fast.<br />
<br />
As part of the purchase, the user also upgraded to the latest VMware Workstation, version 8.x. The user's RHEL4 guest was transferred over from the old laptop (running an older version of Workstation). It powered up and ran just fine on the new hardware / version of VMware, with the exception of the guest shutdown.<br />
<br />
It literally took an hour from the point where the RHEL4 guest issued the halt to the underlying hardware (i.e. the OS portion of the shutdown is done) to a fully stopped virtual guest. From an appearance standpoint, the VM screen turns black and remains that way until it fully stops.<br />
<br />
The laptop hard drive indicator was also flashing madly during the entire process.<br />
<br />
I found suggestions to prevent virus scanners from scanning .vmdk files, but that didn't change the behavior.<br />
<br />
The laptop also uses PGP whole disk encryption. Perhaps Workstation 8 and PGP don't play nicely together? I couldn't find any references. The old laptop (much slower than this one) also had PGP with the older VMware Workstation, so PGP didn't seem to be a prime candidate.<br />
<br />
I eventually discovered <a href="http://communities.vmware.com/thread/165207">this post</a> on the VMware message boards that provided the solution.<br />
<br />
The thread discusses a similar issue happening in VMware Workstation 6.0.4 on Windows XP.<br />
<br />
The solution was to add the following lines to either the global VMware config.ini file, or each individual guest's .vmx file. <b>Exit out of VMware Workstation before modifying config.ini or the .vmx files</b><br />
<br />
<pre class="source-code"><code>
prefvmx.minVmMemPct = "100"
mainMem.useNamedFile = "FALSE"
mainMem.partialLazySave = "FALSE"
mainMem.partialLazyRestore = "FALSE"
</code></pre><br />
With that code added to the configuration file, the virtual machine shuts down immediately after the operating system issues the halt.<br />
<br />
Placing the code in the config.ini file affects all virtual machines, new and existing, unless the settings are overridden in the individual .vmx files.<br />
<br />
For Windows 7, the config.ini file can be found here<br />
<b>C:\ProgramData\VMware\VMware Workstation\config.ini</b><br />
<br />
For Windows XP it can be found here:<br />
<b>C:\Documents and Settings\All Users\Application Data\VMware\VMware Workstation\config.ini</b>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com25tag:blogger.com,1999:blog-5521735836171938846.post-78524706309585096012011-11-18T18:25:00.000-06:002011-11-18T18:25:24.932-06:00Dell Optiplex 790 Workstations hang while rebooting with CentOS 6I'm working on deploying a large number of Dell Optiplex 790 workstations using kickstart and CentOS 6.<br />
<br />
During the initial testing I found that the 790s wouldn't completely reboot with CentOS 6, whether installed or booted into the install media. They'd get as far as "Restarting".<br />
<br />
The solution is to pass an option to the kernel:<br />
<pre class="source-code"><code>
reboot=pci
</code></pre><br />
This can be added manually to the grub configuration file for systems already installed. For kickstarting:<br />
<br />
1. Add the option in your kickstart file<br />
<pre class="source-code"><code>
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet reboot=pci" --md5pass=$1$.xxxxx
</code></pre><br />
2. During the initial boot off of the CD/DVD, press TAB to alter the boot options (this is all one continuous line, broken into multiple lines for readability)<br />
<pre class="source-code"><code>
> vmlinuz initrd=initrd.img ks=http://192.168.1.5/ks/el6/wks1.cfg
ip=192.168.1.100 netmask=255.255.255.0 gateway=192.168.1.1 nameserver=192.168.1.1
ksdevice=eth0 reboot=pci
</code></pre>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com0tag:blogger.com,1999:blog-5521735836171938846.post-33163931406388665642011-11-10T11:34:00.000-06:002011-11-10T11:34:57.510-06:00Fedora 16 does not Boot if /boot is on Software RAIDIn previous versions of Fedora, you could configure /boot to exist on a software RAID device (say a software mirror); however, in Fedora 16 this will result in failure to boot. This wasn't a supported configuration, but it used to work.<br />
<br />
This is a known "<a href="http://fedoraproject.org/wiki/Common_F16_bugs#Cannot_boot_with_.2Fboot_partition_on_a_software_RAID_array">issue</a>" and is explained as follows:<br />
<br />
<blockquote>Cannot boot with /boot partition on a software RAID array<br />
link to this item - Bugzilla: #750794<br />
<br />
Attempting to boot after installing Fedora 16 with the /boot partition on a software RAID array will fail, as the software RAID modules for the grub2 bootloader are not installed. Having the /boot partition on a RAID array has never been a recommended configuration for Fedora, but up until Fedora 16 it has usually worked.<br />
<br />
To work around this issue, do not put the /boot partition on the RAID array. Create a simple BIOS boot partition and a /boot partition on one of the disks, and place the other system partitions on the RAID array. Alternatively, you can install the appropriate grub2 modules manually from anaconda's console before rebooting from the installer, or from rescue mode. Edit the file /mnt/sysimage/boot/grub2/grub.cfg and add the lines:<br />
<br />
insmod raid<br />
insmod mdraid09<br />
insmod mdraid1x<br />
Now run these commands:<br />
<br />
chroot /mnt/sysimage<br />
grub2-install /dev/sda<br />
grub2-install /dev/sdb<br />
Adjust the device names as appropriate to the disks used in your system.</blockquote><br />
I had a system where I'd created a mirror for /boot that had been reinstalled from Fedora 13, 14, 15 and now 16. As reported, it failed to boot following the F16 install.<br />
<br />
Destroying the mirror and creating a simple /dev/sda2 partition for /boot got it booting.Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com3tag:blogger.com,1999:blog-5521735836171938846.post-67365596544864232032011-11-04T12:25:00.002-05:002011-12-21T14:52:41.859-06:00Using check_dell_bladechassis with check_mkThis post builds off of a <a href="http://flakrat.blogspot.com/2011/03/using-checkopenmanage-with-checkmk.html">previous post</a> that documented getting <a href="http://folk.uio.no/trondham/software/check_openmanage.html">check_openmanage</a> working with check_mk.<br />
<br />
In this post we'll add <a href="http://folk.uio.no/trondham/software/check_dell_bladechassis.html">check_dell_bladechassis</a> to the mix to allow for monitoring of Dell M1000e blade chassis (via the CMC management card).<br />
<br />
This was done on the following system:<br />
<ul><li>CentOS 5.7 x86_64 KVM virtual machine<br />
<li><a href="http://omdistro.org/">OMD 0.50</a><br />
<li><a href="http://folk.uio.no/trondham/software/check_openmanage.html">check_openmanage 3.7.3</a><br />
<li><a href="http://folk.uio.no/trondham/software/check_dell_bladechassis.html">check_dell_bladechassis 1.0.0</a><br />
<li>Dell M1000e chassis are queried via SNMP<br />
</ul>Unless otherwise specified, all paths are relative to the site owner's home (ex: /opt/omd/sites/mysite). The check_openmanage code in this blog post is not necessary to get check_dell_bladechassis working; I'm just including it to help tie this entry to the <a href="http://flakrat.blogspot.com/2011/03/using-checkopenmanage-with-checkmk.html">previous post</a>. <ol><li>Change users on your OMD server to the site user: $ su - mysite<br />
<li>Download the latest check_dell_bladechassis from <a href="http://folk.uio.no/trondham/software/check_dell_bladechassis.html">http://folk.uio.no/trondham/software/check_dell_bladechassis.html</a> to ~/tmp and extract<br />
<li>Copy the check_dell_bladechassis script to local/lib/nagios/plugins (this defaults to $USER2$ in your commands)<br />
<pre class="source-code"><code>
$ cp tmp/check_dell_bladechassis-1.0.0/check_dell_bladechassis local/lib/nagios/plugins/
$ chmod +x local/lib/nagios/plugins/check_dell_bladechassis
</code></pre><li>Copy the PNP4Nagios template<br />
<pre class="source-code"><code>
$ cp tmp/check_dell_bladechassis-1.0.0/check_dell_bladechassis.php etc/pnp4nagios/templates/
</code></pre><li>Test check_dell_bladechassis to see that it can successfully query an M1000e CMC (I've inserted carriage returns in the output to make it more readable)<br />
<pre class="source-code"><code>
local/lib/nagios/plugins/check_dell_bladechassis -H dell-m1000e-01 -p -C MySecretCommunity
OK - System: 'PowerEdge M1000e', SN: 'XXXXXX', Firmware: '3.03', hardware working fine|
'total_watt'=1500.000W;0;7928.000 'total_amp'=6.750A;0;0 'volt_ps1'=239.500V;0;0
'volt_ps2'=242.750V;0;0 'volt_ps3'=242.750V;0;0 'volt_ps4'=241.750V;0;0 'volt_ps5'=241.750V;0;0
'volt_ps6'=242.750V;0;0 'amp_ps1'=1.688A;0;0 'amp_ps2'=1.641A;0;0 'amp_ps3'=0.188A;0;0
'amp_ps4'=1.516A;0;0 'amp_ps5'=1.500A;0;0 'amp_ps6'=0.219A;0;0
</code></pre><li>Edit the main.mk file to define the command, etc. (I got the perfdata_format and monitoring_host settings from a previous poster to the list; I'm not sure if they are needed)<br />
<pre class="source-code"><code>
all_hosts = [
'dell-m1000e-01|snmp|m1000e|nonpub',
'dell-r710-01|linsrv|kvm|omsa|nonpub',
'dell-2950-01|linsrv|omsa|nonpub',
'hp-srv-01|winsrv|smb', ]
# Are you using PNP4Nagios and MRPE checks? This will make PNP
# choose the correct template for standard Nagios checks:
perfdata_format = "pnp"
#set the monitoring host
monitoring_host = "nagios"
# SNMP Community
snmp_default_community = "someCommunityRO"
snmp_communities = [
( "MySecretCommunity", ["nonpub"], ALL_HOSTS ),
]
extra_nagios_conf += r"""
# ARG1: community string
define command {
command_name check_openmanage
command_line $USER2$/check_openmanage -H $HOSTADDRESS$ -p -C $ARG1$
}
define command {
command_name check_dell_bladechassis
command_line $USER2$/check_dell_bladechassis -H $HOSTADDRESS$ -p -C $ARG1$
}
"""
legacy_checks = [
# On all hosts with the tag 'omsa' check Dell OpenManage for status
# service description "Dell OMSA", process performance data
( ( "check_openmanage!MySecretCommunity", "Dell OMSA", True), [ "omsa" ], ALL_HOSTS ),
# similar for m1000e
( ( "check_dell_bladechassis!MySecretCommunity", "Dell Blade Chassis", True), [ "m1000e" ], ALL_HOSTS ),
]
</code></pre><li>That should be it; reinventory your M1000e and reload Nagios<br />
<pre class="source-code"><code>
$ check_mk -II dell-m1000e-01
$ check_mk -O
</code></pre><li>The PHP template has a bug that can be fixed with the patch below (see the first comment for details)<br />
<pre class="source-code"><code>
--- a/check_dell_bladechassis.php 2009-08-04 07:00:15.000000000 -0500
+++ b/check_dell_bladechassis.php 2011-12-21 14:44:25.488132187 -0600
@@ -41,7 +41,7 @@
$opt[$count] = "--slope-mode --vertical-label \"$vlabel\" --title \"$def_title: $title\" ";
- $def[$count] .= "DEF:var$i=$rrdfile:$DS[$i]:AVERAGE " ;
+ $def[$count] = "DEF:var$i=$rrdfile:$DS[$i]:AVERAGE " ;
$def[$count] .= "AREA:var$i#$PWRcolor:\"$NAME[$i]\" " ;
$def[$count] .= "LINE:var$i#000000: " ;
@@ -62,7 +62,7 @@
$opt[$count] = "-X0 --lower-limit 0 --slope-mode --vertical-label \"$vlabel\" --title \"$def_title: $title\" ";
- $def[$count] .= "DEF:var$i=$rrdfile:$DS[$i]:AVERAGE " ;
+ $def[$count] = "DEF:var$i=$rrdfile:$DS[$i]:AVERAGE " ;
$def[$count] .= "AREA:var$i#$AMPcolor:\"$NAME[$i]\" " ;
$def[$count] .= "LINE:var$i#000000: " ;
@@ -75,6 +75,7 @@
if(preg_match('/^volt_/',$NAME[$i])){
if ($visited_volt == 0) {
++$count;
+ $def[$count] = '';
$visited_volt = 1;
}
@@ -87,6 +88,7 @@
$def[$count] .= "DEF:var$i=$rrdfile:$DS[$i]:AVERAGE " ;
$def[$count] .= "LINE:var$i#".$colors[$v++].":\"$NAME[$i]\" " ;
+
$def[$count] .= "GPRINT:var$i:LAST:\"%3.2lf $UNIT[$i] last \" ";
$def[$count] .= "GPRINT:var$i:MAX:\"%3.2lf $UNIT[$i] max \" ";
$def[$count] .= "GPRINT:var$i:AVERAGE:\"%3.2lf $UNIT[$i] avg \\n\" ";
@@ -96,6 +98,7 @@
if(preg_match('/^amp_/',$NAME[$i])){
if ($visited_amp == 0) {
++$count;
+ $def[$count] = '';
$visited_amp = 1;
}
</code></pre></ol><br />
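As an aside, everything after the | in the plugin output above is standard Nagios performance data ('label'=value[unit];warn;crit;min;max), which is what PNP4Nagios graphs. A minimal sketch of parsing that format (a hypothetical helper I wrote for illustration, not part of the plugin):

```python
import re

def parse_perfdata(perf):
    """Parse Nagios plugin perfdata tokens: 'label'=value[unit];warn;crit;min;max."""
    result = {}
    for token in perf.split():
        # Capture the (optionally quoted) label, the numeric value, and the unit;
        # the warn/crit/min/max fields after the semicolons are ignored here.
        m = re.match(r"'?([^'=]+)'?=([-\d.]+)([A-Za-z%]*)", token)
        if m:
            label, value, unit = m.groups()
            result[label] = (float(value), unit)
    return result

sample = "'total_watt'=1500.000W;0;7928.000 'total_amp'=6.750A;0;0"
print(parse_perfdata(sample))
# {'total_watt': (1500.0, 'W'), 'total_amp': (6.75, 'A')}
```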
Hope this helps, and comments are welcome.
<br /><br />
2011-06-23 - HowTo - Selectively enable service notifications in Check_mk / OMD<br />
<br />
<a href="http://mathias-kettner.de/check_mk.html">Check_mk</a> installed via <a href="http://omdistro.org/">Open Monitoring Distribution</a> (OMD) is an extremely powerful combination for monitoring devices on a network.<br />
<br />
Notifications from <a href="http://www.nagios.org/">Nagios</a> for the services discovered by Check_mk can overwhelm your inbox / mobile phone if notifications for all services are enabled (the default).<br />
<br />
The following makes service notifications 'opt-in' by configuring Nagios through Check_mk's extra_service_conf setting in main.mk.<br />
<br />
The following section in main.mk will:<br />
* Enable notifications for "IPMI Sensor Summary" and "ambient_temp" to get temperature alerts<br />
* Enable notifications for "fs_*" to get alerts for all file system disk usage<br />
* Enable notifications for "Memory Used" for a specific server, server1<br />
* Disable all other service notifications<br />
<br />
<pre class="source-code"><code>
extra_service_conf["notifications_enabled"] = [
( "1", ALL_HOSTS, ["IPMI Sensor Summary","ambient_temp"]),
( "1", ALL_HOSTS, ["fs_*"]),
( "1", ["server1"], ["Memory Used"]),
( "0", ALL_HOSTS, ALL_SERVICES),
]
</code></pre>Anonymoushttp://www.blogger.com/profile/08465734610397231615noreply@blogger.com2