Tuesday, January 18, 2011

Redirect new print jobs to another queue in CUPS

We're running CUPS with about 10 different printers, and recently one of them started to jam. While we wait for tech support to bless us with a repair, is there a way to redirect all jobs from, say, Printer 1 so that they print on Printer 2 instead?

I know that I can use lpmove to move individual jobs, but I'd rather put something in place that automatically forwards jobs until the printer is replaced.

  • There's probably an easier method, but what I did in a similar situation was to rename the printer, create a printer class with the same name, and put the printer you want the jobs redirected to inside that class (a command sketch follows these answers).

    From Daniel
  • If you feel up to reconfiguring the printer twice, you can tell the bodgy printer that it's an IPP printer pointing to the other CUPS printer you want to redirect to. Only hassle is that you have to reconfigure it back to point at the real printer once tech support sorts itself out.

    From womble
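
A minimal sketch of Daniel's class approach, assuming hypothetical queue names Printer1 (the jammed printer) and Printer2 (the working one); untested, so check lpadmin(8) before running it:

    # Remove (or first back up) the jammed queue, then recreate its name as a
    # class containing only the working printer. Jobs sent to "Printer1" then
    # come out of Printer2, and nothing on the clients has to change.
    lpadmin -x Printer1
    lpadmin -p Printer2 -c Printer1
    # If the new class doesn't accept jobs straight away, cupsaccept/cupsenable it.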

Windows PE autorun scripts

I've set up a Windows Preinstallation Environment on a USB thumb drive (the version from the latest iteration of the Windows Automated Installation Kit, WAIK), and I'm working on getting a deployment environment set up.

However, we'd like to make it so our Level 1 technicians can use the deployment system, and because of this, I've written several batch files to minimize the amount of command line work the technicians have to do.

Is there a way I can get these scripts to autorun after the PE environment has finished initializing?

  • Use the Windows Automated Installation Kit to create a custom WinPE image with a Winpeshl.ini file (see http://technet.microsoft.com/en-us/library/cc766156(WS.10).aspx). A sample is sketched after these answers.

    From newmanth
  • Generally I don't do autorun scripts, for fear that someone would accidentally wipe their machine. The solution I found worked well was to provide a single batch file with a simple name and tell the technicians to type that at the command prompt and press return. If you have multiple images for different makes/models, you could name the batch file after the make/model, e.g. type "dellgx280" to re-image a Dell GX280.

    You can, however, modify the startnet.cmd file (WinPE's equivalent of autoexec.bat) to do it if you really want. That's just a case of mounting the PE image read/write, locating the file (at %systemroot%\system32), editing it, then committing and unmounting.

    tearman : This doesn't mess with the disks, it just sets up the network configuration and launches the installer. Preferably the partitioning will be taken care of manually, both for the reason you describe and to differentiate between systems.
    From mh
  • I am having a terrible time finding the startnet.cmd file in %systemroot%\system32. I have searched and it looks like it doesn't exist. Does anybody have any suggestions? ~ALM

  • You can find the startnet.cmd file within the boot.wim file. You will need to mount the image using the command imageX /mountrw c:\boot.wim 1 c:\winmount

    From Damion
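
A sample of the Winpeshl.ini approach newmanth describes, assuming a hypothetical menu.bat placed on the PE image (the [LaunchApps] section is documented at the TechNet link above):

    [LaunchApps]
    %SYSTEMDRIVE%\deploy\menu.bat

And, putting mh's and Damion's steps together, the mount/edit/commit cycle looks roughly like this (paths are examples; double-check the imagex switches against your WAIK version):

    imagex /mountrw c:\winpe\boot.wim 1 c:\winmount
    rem ...edit c:\winmount\Windows\System32\startnet.cmd to call your batch files...
    imagex /unmount /commit c:\winmount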

To restart a service (e.g. httpd), should I use /etc/init.d/httpd restart or /sbin/service httpd restart?

Whenever I need to restart Apache on my RHEL VPS, I type sudo /sbin/service httpd restart. However, I notice a lot of articles suggest using sudo /etc/init.d/httpd restart.

Is there any difference? If there is, when should I use each one?

  • Using /sbin/service is good because it gives the daemon a fresh environment to work in, without any potentially-annoying environment variables getting in the way and causing havoc. At least, that's the rationale for requiring it at work; I've always used the init scripts directly on my other-distro machines and it hasn't caused a visible problem, but I'm assured that there was a real problem that /sbin/service is working around. Perhaps RHEL systems just have more crap laying around in the default environment.

    From womble
  • From the man page:

    service runs a System V init script in as predictable environment as possible, removing most environment variables and with current working directory set to /.

    It then calls the init.d script. So they both accomplish the same thing, except that calling the script via /sbin/service ensures that environment variables in your shell don't screw up the init.d script (a rough sketch of this appears after these answers).

    From Insyte
  • My advice would be to use whatever you want whenever you're actually logged in to the machine, and to use /etc/(init.d|rc.d)/daemon-name if you are scripting, the reason being that the latter is generally more portable. IIRC, the only distros that ship /sbin/service in the base install are RH-flavored, i.e. RHEL, CentOS, Fedora. My Debian systems, for example, do not have this script, though this may not matter for your environment.

    Zoredache : Debian-based systems have 'invoke-rc.d', which fills the same role as service.
    serverninja : There really should be a distribution-independent way of doing this.
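
For illustration, here is a simplified sketch of what /sbin/service effectively does on RHEL-style systems; this is not the actual script, just the gist of the man-page excerpt Insyte quotes, with httpd as the example service:

    # Run the init script from / with a nearly empty environment, keeping only
    # a few basics; roughly what "service httpd restart" amounts to.
    cd /
    env -i LANG="$LANG" PATH="$PATH" TERM="$TERM" /etc/init.d/httpd restart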

using awk in a bash script with if

Hi,

I'm writing a script in which I need to test a string; based on the result, I'll decide whether or not to go further.

The command below works fine (if the string starts with "Clean" it will print 1, otherwise 0).

echo | awk ' {print index("'"${task}"'", "Clean")}'

What I'm trying to do is use awk with an if in a Bash script. Based on this post, I did the following:

idx=$(awk '{print index("'"${task}"'", "Clean")}')
echo $idx
if [ "$idx" == "0" ]; then
   echo hello
fi

As I said, when I run the script it prints "0", but the second "echo" doesn't print anything, and of course the if doesn't work either.

Can anyone help me?

TIA,

Bob

  • Awk is the wrong solution here. How about:

    if [[ "$task" =~ ^Clean ]]; then
      echo hello
    fi
    

    There's also a sneaky substitution method:

    if [ "${task/#Clean/}" != "$task" ]; then
      echo hello
    fi
    

    But I find that to be far less readable.


    Chris reckons case is tidier... let's try that out:

    case $task in
      Clean*)
        echo hello
        ;;
    esac
    

    My aesthetic sense says "hell no with bells on", and fiddling with the formatting (`Clean*) echo hello;;` or similar) won't help much, IMAO. If you want a conditional, use an if, I say.

    chris : You can use case for stuff like this; it's typically quite a bit tidier.
    chris : The nice thing about case statements is that they eliminate great big piles of if ; then ; elif ; then ; elif ; then ; else; fi structures, and they have better regular expression matching than test.
    womble : Well, when we get to having lots of if/elif blocks *involving the same variable*, we can switch to using a case statement. Also, case uses pathname expansion (roughly, globbing), whereas `[[ =~` uses *actual* regular expressions.
    From womble
  • Womble's answer is probably what I would go for, but you can also use grep:

    if echo cleanarf | grep -qi '^clean'; then 
       echo foo
    fi
    

    Drop the i switch to grep if you want it case sensitive (or if you do not want it case insensitive :-p ).

    womble : This is less readable, IMAO, and slower because it involves spawning a separate process.
    Kyle Brandt : Agreed, that is why I said I would opt for your answer :-P But options are good.
  • In your first example, you use echo to provide awk with a file (stdin), but in your script, there is no file for awk to process and it "hangs" waiting for input. You could use echo in your variable assignment in the $(), but I prefer to use /dev/null for this purpose (i.e. when there's no actual file to process).

    In order to avoid awkward quoting, you can use variable passing with the -v option to provide Bash variables to an awk script.

    You should find that this works:

    idx=$(awk -v task="$task" '{print index(task, "Clean")}' /dev/null)
    echo $idx
    if [ "$idx" == "0" ]; then
       echo hello
    fi
    

    You should probably use womble's suggestion, though, since it doesn't require spawning a separate process.

How do I set two gateways for one ethernet card in linux?

Hi,

How do I set two gateways for one ethernet card in linux?

Thanks a lot.

  • The answer will depend on which distribution you are using.

    Also, could you please add more detail about what you are trying to accomplish?

  • Hello,

    Take a look at the 'route' program on Linux (man route). If you are trying to accomplish static routing, you want something along the lines of the following (a concrete example appears after these answers):

    route add [-host|-net] ...[etc]
    

    What are you trying to do exactly? With some more details someone could probably lead you to exactly the solution you're looking for.

    From Michael
  • Hi

    It depends on the Linux distribution you're using. On Debian-like systems you change the contents of /etc/network/interfaces; on RHEL systems you edit /etc/sysconfig/network-scripts/ifcfg-<interface> (where <interface> is the name of your ethernet card, e.g. eth0).

    You can add a "gateway" entry followed by the IP address. As soon as you restart your network interface, the routes will be added to your routing table automatically.

    As Michael already wrote, you can also just add the corresponding route instead of editing the configuration files.

    From grub
  • Let's assume you are using RedHat ES 5. Let's also assume that you want eth0 to route packets destined for 192.168.1.0/25 through 192.168.1.1 and packets destined for 192.168.1.128/25 through 192.168.1.129.

    In /etc/sysconfig/network-scripts, create a file called route-eth0. In it, put:

    192.168.1.0/25 via 192.168.1.1
    192.168.1.128/25 via 192.168.1.129
    

    Now:

    /sbin/service network restart
    

    And you should be good to go. You can check your current routing table with

    netstat -nr
    

    A good resource for RedHat can be found here:

    http://www.redhat.com/docs/en-US/Red%5FHat%5FEnterprise%5FLinux/5.4/html/Deployment%5FGuide/s1-networkscripts-static-routes.html
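
To make Michael's route outline concrete, a hedged example (the addresses and interface are made up; adapt them to your own subnets):

    # Keep the existing default gateway, and send traffic for a second subnet
    # via a second gateway reachable on the same interface.
    route add -net 10.10.0.0 netmask 255.255.0.0 gw 192.168.1.254 dev eth0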

Does anyone know if "SATA II" drives are compatible with a Dell SAS 5i/R?

Hi. I want to buy some cheaper HDs for our server (it will have practically zero HD access) but I plan to put them into a RAID 1 config, just for peace of mind. Our server is a second-hand PowerEdge 860, with a SAS 5i/R controller on it.

The existing drives (which came with the server) are Barracuda 7200.9's, which are "SATA II" (i.e. they've got 3Gb/s speeds and NCQ). That would answer my question, except that Seagate made them to be 100% backwards compatible, too.

I'm concerned that newer, bigger disks, which may not be 1.5Gb/s backwards compatible, will not work with the SAS 5i/R controller.

Does anyone know for sure?

  • Speed negotiation is part of the SATA initialization protocol. Any controller worth its weight properly implements this. Some early VIA and SiS chipsets were known to fail at this, but I would expect better from the SAS 5i/R (LSI, I believe?).

    womble : Yep, rebadged LSI.
    Django Reinhardt : It is indeed an LSI (model #: UCS 51, I think).
    Django Reinhardt : Confirmed. Nautilus reports negotiation at 3.0Gb/s. Woo!
    From Kyle Smith
  • According to the documentation on Dell's website (link), the SAS 5i/R does support SATA, but it doesn't explicitly say SATA I or II. Kyle Smith is right about speed negotiation: newer controllers should be able to handle it, while I'm willing to bet older controllers might require a jumper to enable it.

    To answer your question: I don't think anyone knows for sure unless they've run your exact setup with the 5iR and the Seagate Barracuda 7200.9. If I were a gambler, I'd personally take the chance as SATA is fairly mature and commonplace these days.

    According to Wikipedia (link), the section on "SATA 3 Gbit/s (Second generation)":

    Given the importance of backward compatibility between SATA 1.5 Gbit/s controllers and SATA 3 Gbit/s devices, SATA 3 Gbit/s autonegotiation sequence is designed to fall back to SATA 1.5 Gbit/s speed when in communication with such devices. In practice, some older SATA controllers do not properly implement SATA speed negotiation. Affected systems require the user to set the SATA 3 Gbit/s peripherals to 1.5 Gbit/s mode, generally through the use of a jumper, however some drives lack this jumper. Chipsets known to have this fault include the VIA VT8237 and VT8237R southbridges, and the VIA VT6420, VT6421A and VT6421L standalone SATA controllers.[10] SiS's 760 and 964 chipsets also initially exhibited this problem, though it can be rectified with an updated SATA controller ROM.[citation needed]

    Seeing as you're using the SAS 5iR and don't have to worry about the VIA/SiS chipsets, I'd be willing to try it out. Just my two cents.

    Django Reinhardt : Thanks for the link. The specs on Dell's website DO say it supports 3Gb/s transfer... not sure if that means it includes the so-called "SATA II".
    osij2is : Yeah, the documentation is a bit ambiguous, but considering the controller is relatively new, I'd bet on it and say it would work with the 7200.9 Barracudas.
    From osij2is

Search Keywords in MOSS

I'm setting up a new Search Center for our intranet using MOSS and want to make heavy use of Keywords and Best Bets. However, two questions about this have me perplexed, and I would appreciate any help/guidance.

1) If you assign a contact to a keyword/best-bet and set a review date, my understanding is that SharePoint will automatically send that contact an email alert when that time comes. However, in my testing, just using my account as the contact, I have never received one of these. Am I doing something wrong?

2) What permissions would a user need to edit/update a keyword they are the assigned contact for? I assume, too, that if they have the rights to update one of them then, unfortunately, they can probably update them all, correct?

    1. I cannot get an alert sent on my systems - not really proof of anything though.

    2. The keywords are stored in the site settings, so site collection administrator is going to be required. Site collection administrators will also all be able to edit them.

    However, you could easily create a list that allows people to add "keywords" and permission it so they can only edit their own. Add an event handler to modify the actual keywords when an item is added or modified.

    Ryan : Nat, note that I discovered that keywords are not stored in the site collection but rather in the SSP. You can see this by taking an STSADM backup of the site collection and then restoring it somewhere else; you'll see that the keyword collection is empty.
    From Nat
  • I spoke with Microsoft about these questions and learned that there is no automated alert or built-in workflow that triggers when a keyword hits its review date. Also, only a site collection administrator can view and update a keyword. If you want this type of functionality, your only choice is to roll your own solution.

    From Ryan

best connection suggestions or best practices for multi site data pull, expert advice needed

I will have a NAS box that will be receiving data from 20 sites in 20 cities. The computers will be sending about 500MB a night from each site. What hardware is needed to achieve this?

thanks gd

  • A DSL connection should be enough; if we assume that a "night" is 8 hours, the average bitrate required to transfer 10GB in 8 hours is about 3.5Mbps, which any decent DSL connection should be able to sustain in the downstream direction (which is what you're going with). You might need a special data plan from your DSL provider, since 300GB of traffic is a fair bit (at least in my part of the world) and you really don't want to get shaped.

    From womble
  • Some allowance for failures, on both the sending and receiving sides, should be considered, and the assumption should be that the transfer will be running 24x7. One missed "night" and the data has doubled for the next night; a missed weekend and there is 30GB + 10GB for Monday night. It's easy to get behind and have a lot of trouble catching up. This may mean you'll need more, or dedicated, bandwidth at the receiving site.

    On the client side you'll need to calculate upload times; if using asymmetric DSL, consider what happens if a client skips a few days and has to catch up. The upload will probably need to be throttled, or some type of QoS implemented, to give your "normal" traffic priority during working hours.

    The hardware really depends on how long you need to keep the data stored at each location, which will let you calculate the storage capacities needed. Most current firewalls will allow for QoS; most consumer-grade routers/modems will not.

    If the data will be all new each day then you'll need to copy the entire set each night. However, if the data is a change from the previous day, consider software that will let you copy just the changes/deltas. The delta can be at either block or file level depending on the systems involved. On Linux, rsync will do block-level diffs. In Windows Server 2008 there is Remote Differential Compression.

    From Ed Fries
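
As a concrete example of the rsync approach Ed Fries mentions, assuming a made-up destination host, paths and bandwidth cap:

    # Push the nightly data set, resuming partial files after a failure and
    # capping the transfer at roughly 512 KB/s so it doesn't saturate the uplink.
    rsync -az --partial --bwlimit=512 /data/nightly/ backup@nas.example.com:/archive/site01/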

Sharepoint error 403 access forbidden

I am new to SharePoint, and after creating a new web application this is what I'm getting:

The website declined to show this webpage HTTP 403
Most likely causes: •This website requires you to log in.

This error (HTTP 403 Forbidden) means that Internet Explorer was able to connect to the website, but it does not have permission to view the webpage.

I have anonymous access enabled in IIS and in SharePoint Central Administration. Does anyone know how to make this work?

    1. Check the NTFS security permissions on that object (folder/file)
    2. Run "Filemon" (http://technet.microsoft.com/en-us/sysinternals/bb896642.aspx) on the SharePoint server to find out which AD object (user/group) is having issues accessing/serving the file.
    From Home Boy
  • I have seen this when the account that acts as the identity for the app pool connected to your SharePoint site is not in the correct groups. Figure out which user is the app pool identity, then add it to the following groups in Computer Management:

    • WSS_ADMIN_WPG
    • WSS_WPG
    • Administrators

how to have publishing, blog and wiki features together?

Hello everyone,

I am using SharePoint 2007 Enterprise + Publishing Portal template + Windows Server 2008. I want to have blog and wiki features as well as publishing portal features. Any ideas how to integrate the publishing portal, blog and wiki? By integrate, I mean using the same user name and password to pass through authentication for the publishing portal, blog and wiki. And should I set up 3 different site collections for the publishing portal, blog and wiki? (I find that if I set up a publishing portal site collection, I cannot create blog and wiki sub-sites.)

thanks in advance, George

  • There should be no problems at all with this. I just checked my SharePoint server with the publishing portal as the root of the site collection, and can create blog and wiki subsites with no problem. No need to have separate site collections. Where are you running into problems?

    From Sean Earp

iptables -L pretty slow. Is this normal?

Hi,

Quick question, but Googling has not revealed an answer. When I do iptables -L, it seems to lag on displaying the rules where I have limited the source to internal IPs (192.168.0.0/24).

The whole listing takes about 30 seconds to display.

I just want to know: Does this affect the speed of my incoming connections or is this simply a side effect of having all these ranges within my iptables rules?

Thanks!

  • Include the -n option so it doesn't try to use DNS to resolve names for every IP address, network and port. Then it will be fast.

    Kyle Brandt : I generally like `iptables -vnL --line-numbers` for my listing command. Keep in mind by default you don't see all the tables, for instance, the nat table. To see that nat table: `-t nat`
    Bartek : Thanks, that makes sense. :)
    From Zoredache
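
Putting Zoredache's and Kyle's suggestions together as commands you can paste (the nat table is listed separately because -L only shows the filter table by default):

    iptables -nL                    # numeric output, no DNS lookups, so it's fast
    iptables -vnL --line-numbers    # more verbose listing with rule numbers
    iptables -t nat -nL             # the nat table has to be requested explicitly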

How to migrate a bare metal Linux installation to a virtual machine

I'd like to migrate a RHEL5 installation from a bare-metal installation to a virtual machine. I'm not very experienced when it comes to Linux backup and restore procedures, so I'm looking for advice on the best way to accomplish this. The requirements are

  • must be able to reduce the size of the disk (physical disk is over 200gb, mostly empty space, so the VM should be able to be made smaller)
  • there is an Oracle installation on the machine which must come along for the ride (if there's a way to stop writes going to the disk while backing it up, that'd be ideal)
  • I can install the OS on the destination VM before restoring to it, if required
  • this is not a production system, so I'm not worried about uptime or performance
  • everything needs to be moved (installed software, users/groups, /etc/* configuration, etc.)
  • the disk being backed up is the primary disk, but there is a secondary disk which can be used for storing data before moving it to the VM.

I assume that needing to reduce the disk space rules out using dd. Would tar work for my requirements? Is there some way of taking the file system offline so applications can't write while I'm backing it up? Can Oracle be backed up using tar if it is stopped at the time, or do I need to move it separately from the rest of the system, using its built in tools?

  • You're missing an important piece of information here - which virtualisation hypervisor?

    If it's VMware, there are P2V converters available, both free (limited) and paid (more powerful), that can create VMs native to VMware or in the .OVA open virtual machine format.

    Others will know the P2V conversion options for Hyper-V/KVM/Xen etc. better than I.

    Bill : I knew I was leaving something out. I would prefer VMWare, but really it doesn't matter. Xen would be fine as well.
    Bill : This is definitely a case where knowing the proper terminology ("P2V") greatly helps in searching google. It looks like VMWare vCenter Converter is what I want to use, although with Linux, it can unfortunately only convert to VMWare Infrastructure, not to VMWare Server. Although I haven't had the time to get this working yet, I'm marking this as the accepted answer.
    From Chopper3
  • There's an unsupported script out there that's published on a Red Hat URL. It's basically an ISO that boots from CD, sucks your network config from the HDD to get online, and then SCPs your filesystem to an awaiting host. It also sends a Xen config.

    Works great. Once those files get transferred, you can fire them right up on your Xen server.

    In a way this is safe, because you're doing read-only ops on your physical machine. But, if it's a production machine, the usual disclaimers apply. The only trouble I ran into was that I had to fiddle with the kernels so that I had *xen kernels on the new virtual machine instead of non-xen. That caused a bit of unexpected downtime but I wasn't working on a critical machine either.

    This is definitely experimental, but it worked for me. If you have trouble, you can always fire the physical back up immediately.

    PS: Be familiar with kpartx ahead of time, in case you need to get inside your disk images when they're not running.

    http://people.redhat.com/~rjones/virt-p2v

    From pboin
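
On pboin's kpartx tip, a hedged example of getting at the filesystems inside a guest disk image (the image path and mapping names are made up; kpartx prints the mappings it actually creates):

    kpartx -av /var/lib/xen/images/rhel5-disk.img   # map the image's partitions under /dev/mapper
    mount /dev/mapper/loop0p1 /mnt/guest            # use whichever mapping kpartx reported
    # ...inspect or fix files...
    umount /mnt/guest
    kpartx -d /var/lib/xen/images/rhel5-disk.img    # remove the mappings again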

FTP timing out after login

For some reason I can't access any of my accounts on my dedicated server via FTP. It simply times out when it tries to display the directories.

Here's a log from FileZilla...

Status: Resolving address of testdomain.com
Status: Connecting to 64.237.58.43:21...
Status: Connection established, waiting for welcome message...
Response:   220---------- Welcome to Pure-FTPd [TLS] ----------
Response:   220-You are user number 3 of 50 allowed.
Response:   220-Local time is now 19:39. Server port: 21.
Response:   220-This is a private system - No anonymous login
Response:   220-IPv6 connections are also welcome on this server.
Response:   220 You will be disconnected after 15 minutes of inactivity.
Command:    USER testaccount
Response:   331 User testaccount OK. Password required
Command:    PASS ********
Response:   230-User testaccount has group access to:  testaccount
Response:   230 OK. Current restricted directory is /
Command:    SYST
Response:   215 UNIX Type: L8
Command:    FEAT
Response:   211-Extensions supported:
Response:    EPRT
Response:    IDLE
Response:    MDTM
Response:    SIZE
Response:    REST STREAM
Response:    MLST type*;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
Response:    MLSD
Response:    ESTP
Response:    PASV
Response:    EPSV
Response:    SPSV
Response:    ESTA
Response:    AUTH TLS
Response:    PBSZ
Response:    PROT
Response:   211 End.
Status: Connected
Status: Retrieving directory listing...
Command:    PWD
Response:   257 "/" is your current location
Command:    TYPE I
Response:   200 TYPE is now 8-bit binary
Command:    PASV
Response:   227 Entering Passive Mode (64,237,58,43,145,153)
Command:    MLSD
Response:   150 Accepted data connection
Response:   226-ASCII
Response:   226-Options: -a -l 
Response:   226 18 matches total
Error:  Connection timed out
Error:  Failed to retrieve directory listing

I have restarted the FTP service several times but it still doesn't load. I only have this problem when my server is reaching its peak usage, which is still only a load of 1.0 (4 cores) and 40% of 4GB RAM.

The FTP connections aren't maxed out, because only my colleague and I have access to FTP on the server.

  • You state that this usually works? Can we get a log output of what it looks like when it is functioning correctly?

    How many files are in that directory? Anything over like 10k will bog down the server, and cause timeouts when trying to read the whole list.

    My third guess is that maybe passive mode communications aren't traversing your firewalls correctly. I wouldn't set it to active mode until the other questions are answered (if it usually works, changing settings just adds more variables).

    Brent : I had a similar issue, which I recall related to using PASV mode - but I don't recall the details - I mention it to validate your third point.
  • What do the logs look like on the server at this time?

    Things to try:

    • PASV mode
    • SFTP
    • a different FTP client
    From briealeida
  • I've never seen a system do this myself, but from the log and your other comments it simply looks like your firewall can't handle the second TCP socket used (for data transfer) when it's heavily loaded.

    How many other concurrent TCP sockets are open when this happens?

    FWIW, I'd try active mode instead of passive - it'll cost almost nothing to try it.

    Alnitak : If it's Unix (which your use of Pure-FTP suggests is the case) then you can check the server itself with `netstat -an`.
    From Alnitak
  • I had a similar issue, where my FTP connection inexplicably went into passive mode despite my explicit "active" mode setting. I restarted my cable modem and router and that fixed it.

    From NEPatriot
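
If passive mode behind a firewall does turn out to be the culprit, as the earlier answers suggest, one hedged way to tame it with Pure-FTPd is to pin the passive data ports to a small range and open only that range (the range and file location are examples; some installs pass the equivalent -p option on the command line instead):

    # /etc/pure-ftpd/pure-ftpd.conf
    PassivePortRange 30000 30100

    # then let that range through the firewall, e.g. with iptables:
    iptables -A INPUT -p tcp --dport 30000:30100 -j ACCEPT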

How do I install Migration Manager with Plesk already installed?

I'm running Plesk 9.2 on RHEL 5.

I just installed it, only to realize that 9.2 supports, but doesn't include, Migration Manager by default.

What's the easiest way to install Migration Manager without having to redo the entire install?

  • In other Plesk versions you could add modules post-install from the software updater.

    abrahamvegh : Thanks, that was it!
  • Plesk's Migration Manager can be easily installed with /usr/local/psa/admin/bin/autoinstaller. Just start it up, go to the Components menu, select PMM (Plesk Migration Manager), and click through the installer.

  • Thank you very much! It worked great!

    From Paolo

Search server (sharepoint) permissions

I have created an ASP.NET web application that is calling the search.asmx service on a Search Server 2008 Express instance.

When I connect using a domain account that is a local admin on the search box everything works great; however, this isn't a long-term solution. What permissions do I need to give the domain account for my web service call to work?

The error I receive when connecting with a non-domain-admin account is:

Attempted to perform an unauthorized operation

Thanks in advance.

Western digital caviar green and raid

I would like to use the low-cost Caviar Green (Western Digital WD20EADS 2TB or WD10EADS 1TB) hard drives in a hardware RAID for a non-mission-critical purpose: is anyone aware of problems due to the rotational speed scaling (5400-7200 RPM) typical of these drives?

thank you

  • I've been running the 4x WD10EADS on 3Ware 9650se SATAII RAID without any issues. While I'm aware of the scaling issue, I don't think it's a problem for hardware RAID controllers in general.

    From osij2is
  • The only problem we've had with them at work (we don't have many, but a few) is that they're pretty darned slow. If you need a lot of storage, but not a lot of IOPS, then they seem to be a pretty decent way to go.

    osij2is : +1 for IOPS. While I don't experience any slow access times on my array, I wouldn't be surprised if it were to become an issue.
    From womble
  • I've run the GreenPower drives in a RAID configuration for a couple of years now in three different setups: connected to a motherboard for software RAID, connected to a RAID card, and inside a Drobo. Performance was always best on the RAID card because it never let the drives go to sleep, so there wasn't a delay while the drives woke up. However, because of this, it also meant that power savings came only from the lower rotational speed and not from the drives' ability to sleep. Directly on the motherboard worked pretty well, but the spin-up times were annoying. I was using the drives to record HDTV and I'd tend to lose the first few seconds while the drives woke up. Finally, I got a Drobo and put the drives in there. Spin-up times are really annoying now, but I've stopped using them for primary storage; I just migrate data there for long-term storage.

    So, no problems with rotational speeds. It seemed that my rather cheap (~$200) RAID card handled it with no problem. But just be aware that you're not going to get the same power savings you'd get on a desktop machine if you've got them connected to a dedicated card.

    From Pridkett

Error 207 - invalid column name 'msrepl_tran_version' with Sql Server Replication

I'm setting up transactional replication with updatable subscriptions on SQL Server 2005. I set up the database from a backup, and haven't changed the schema or even changed the data since making the backup. I'm getting the following error in my job history:

Error 207: invalid column name 'msrepl_tran_version'

What is causing the problem?

  • Found the answer, but not via Google directly. Transactional publication adds a column named "msrepl_tran_version" to each table in the database. I took the backup before setting up the replication, so the local copy did not have the msrepl_tran_version column. Restoring a more recent backup solved the problem.

    From ristonj

Descriptive hostnames for server farm

Can someone give some insight into 4 fully qualified hostnames I can use for our new server setup? Assume the name of the website is helloworld.com. We have the following:

  • A Webserver
  • A Failover Server
  • A Storage Server
  • A Database Server

  • Perhaps:

    • webserver: www.helloworld.com
    • failover: failover for what, www? Then maybe www2.helloworld.com
    • storage: storage.helloworld.com
    • database: db.helloworld.com

    Jonathan Kushner : Do I need the www for a hostname?
    Bart Silverstrim : Are you talking about names in DNS for the external network, or internal host names? You can add lots of "names" in DNS to refer to a server but have one name for the actual machine...
  • What is your expansion plan for the future?

    A generic setup that allows for scalability (more machines in the future) would look like this:

    web1.helloworld.com
    standby1.helloworld.com
    files1.helloworld.com
    db1.helloworld.com

    If the failover is for the web server, you may want to try: web2.helloworld.com or backup2.helloworld.com or something like that.

    (You may start things with 1 or 0.)

    Kyle Brandt : This is smarter, starting at 1 for everything :-)
    Dennis Williamson : You may not need it right away, but I'd pad those numbers with one or more leading zeros.
    briealeida : Dennis, you're so right. I often forget to do that with other stuff and then have to write a script to fix it. Thanks. :-).
    From briealeida

IBM DS3200: some general questions

I'm thinking of buying an IBM DS3200 (p/n 1726-21X) with SATA hard drives, attached to an IBM SAS HBA controller, and configuring it as "single server, single path" as IBM shows in its configuration examples.

Unfortunately I've never had one, so I would like to know some details from someone who already does.

  1. I have some empty hard disk enclosures (p/n 42R4129, the ones used by the DS3200): if I fill them with HDDs from other brands, let's say Western Digital, will they be recognized, or is there some BIOS check on the serial number?
  2. Does the DS3200 support 2TB HDDs? IBM says 1TB, but I think that's only because they don't offer 2TB HDDs yet.
  3. How many hard drives can be used? Any number from 1 to 12 inclusive?
  4. Do you have any benchmarks of read and write speed?
  5. How does it handle disks? I mean, the DS3200 is attached to a SATA multilane port and it can be filled with 12 HDDs. How are they seen by the controller: as 12 distinct drives or as 4 groups of 3 HDDs?
  6. Do you have any screenshots of the administration interface?
  7. Which device handles the RAID: the SAS HBA controller or the DS3200 itself?

thank you!

    1. I'm pretty sure there are BIOS checks on the drives; there are on other, older IBM disk arrays.

    2. They don't supply 2TB disks for it yet, but they likely will, and it will almost certainly support them.

    3. Yes, you can put in as many or as few disks as you like, though with fewer than 5-6 it doesn't make much sense to be using it.

    4. The DS3400 is the identical array with a fibre channel interface; it's benchmarked using SPC-1 and SPC-2 here, and the performance will be very similar.

    5. The server's SAS card sees the RAID controller in the DS3200 rather than the raw disks. The presentation is controlled via the IBM Storage Manager client software, which connects over TCP/IP. You can build multiple RAID arrays on the DS3200, each of which would appear as a single disk to the server.

    6. The full configuration guide is available as an IBM Redbook and includes multiple screenshots.

    7. The DS3200 internal controller handles the RAID.

    The IBM DS3000 series arrays are pretty good at what they do; they're also pretty dumb compared to most other arrays out there, but they are cheap. It's based on an LSI model; Dell sells an essentially identical MD3000 disk array.

    Hope that all helps

    dam : thank you very much!
    From Ewan Leith
  • Hi Dam, did you ever test point 1? Is it really true that the DS3200 does not support SATA drives from other vendors?

    I've found some information about errors with dual-controller DS3200s and single-port SATA drives, but logically that should only affect dual-controller units. Here is the document.

    If you have already tested other vendors' drives, please provide some detailed information: vendor, drive type, firmware version, etc. Thanks in advance.

    dam : We didn't buy the DS, so I can't answer your question, but during my research I found this document (https://www.ibm.com/developerworks/forums/thread.jspa?threadID=252649) which says that other brands are supported. The post seems to be reliable, because I tested an OEM Seagate ST31500341AS on the 8S and it failed exactly as described there (unbelievable!! it's due to the firmware: no disk larger than 1TB is supported by the 8S).

IP Forwarding and traffic shaping

Is there any way to forward packets from network A to network B (just like a router) without changing the source IP address (and vice versa, from network B to network A), while also enforcing traffic shaping rules?

The solution should be implemented in FreeBSD.

I googled traffic shaping in FreeBSD and found ALTQ, but I am not sure whether it is possible to forward packets transparently with ALTQ or not.

If it's possible, then it's likely that I could set up a network with a Squid server (for caching and, more importantly, logging users' downloads/uploads) and ALTQ (or something else) to manage their bandwidth. So my network architecture would be:

Internet <==> SquidServer <==> TrafficshapingServer <==> LocalNetwork

But if the traffic-shaping server replaces the source IP of packets with its own IP address, Squid's logs become useless, because Squid can't tell which packet came from which IP address (all Squid sees is the traffic shaper's IP address).

  • Sure, routers don't ordinarily change source/destination IP addresses; only NATting routers do that. So, just don't use NAT, and all will work fine.

    Isaac : So it is possible to route without NATing when using ALTQ?
    womble : I'd be surprised if it couldn't be.
    From womble
  • I'm not familiar with it, but I'd be willing to bet that the traffic shaper you're referring to is NATing primarily to guarantee that traffic returns via the correct path. Otherwise you're likely to end up with an asymmetric routing problem, which will cause failures.

    One thing you might want to look into is inserting an X-Forwarded-For header at the Squid server. This is actually a more reliable method of tracking the source in a proxying environment, as it inserts the information into the request itself instead of relying on IP header information.
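
As a rough sketch of womble's "route, don't NAT" suggestion combined with ALTQ, here is a hypothetical pf.conf fragment for FreeBSD; the interface names, LAN subnet, bandwidth and queue split are made up, the kernel needs the ALTQ options compiled in, and IP forwarding itself is enabled with gateway_enable="YES" in /etc/rc.conf. Treat it as a starting point, not a tested configuration:

    # Plain routing plus ALTQ shaping; note there is no "nat on" rule, so
    # source addresses are preserved and Squid's logs stay meaningful.
    ext_if = "em0"
    int_if = "em1"

    altq on $ext_if cbq bandwidth 10Mb queue { q_default, q_lan }
    queue q_default bandwidth 60% cbq(default)
    queue q_lan     bandwidth 40%

    pass in  on $int_if from 192.168.0.0/24 to any keep state
    pass out on $ext_if from 192.168.0.0/24 to any keep state queue q_lan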

How much space should you leave free on a hard disk?

Is there a rule of thumb for how much space to leave free on a hard disk? I used to hear you should leave at least 5% free to avoid fragmentation.

[I know the answer depends on usage (e.g. video files vs. text), disk size, RAID level, and disk format, but as it's impractical to ask 100 variations of the same question, any information is welcome.]

  • I would say typically 15%; however, with how large hard drives are nowadays, as long as you have enough for your temp files and swap file, technically you are safe. But as a safe practice, once you hit 15% free, it's time to start thinking about doing some major cleanup and/or purchasing a larger/second hard drive.

  • From what I understand, it also depends on the file system on the drive. Some file systems are more resilient to things like disk fragmentation.

    username : +1. Oops, I'm amending my question to mention volume format.
    From Psycho Bob
  • You generally want to leave about 10% free to avoid fragmentation, but there is a catch. Linux, by default, will reserve 5% of the disk for the root user. When you use 'df', the output doesn't include that 5% if you run it as a non-root user. Just something to keep in mind when doing your calculations.

    Incidentally, you can change the root reserve by using tune2fs. For example

    tune2fs -m 2 /dev/hda1
    

    will set the root reserve at 2%. Generally this is not recommended of course, unless you have a very specific purpose in mind.

    nedm : Good tip -- I do this for our terabyte drives and knock it down to 1% -- that's still 10GB reserved for root, and I've never run into any problems with it.
    James : that's ext3-specific, not all Linux filesystems do that.
    From jedberg
  • I would recommend 10% plus on Windows, because defrag won't run if there is not about that much free on the drive when you run it. Leaving free space, however, will not necessarily stop fragmentation from occurring. As you already mentioned, it depends on the usage. Fragmentation is driven more by how much the data on the drive changes and by the size of the files being written and removed. The more the data changes, and the more random the file sizes, the more chance you have of fragmentation occurring.

    The only real way to minimise fragmentation is to defragment the drive on a regular basis, either manually or with a tool like Diskeeper, which runs in the background on Windows, cleaning up when the machine is idle. There are filesystems where fragmentation is handled by the OS in the background, so manually running a defragmentation is not necessary.

  • Yes, it depends on usage and the underlying storage system. Some systems, like high-end SAN-based disk arrays, laugh at file fragmentation, so the only impact of fragmentation is the OS overhead of scattering things all over hither and yon. Other systems, like laptop drives, are another story altogether. And that doesn't get into newer file systems, such as ZFS, where the concept of a hard limit on space is nebulous at best.

    NTFS is its own beast, of course. These days I give C:\ a total size of 15GB for XP systems, and haven't played with Vista/Win7 enough to know what to recommend there. You really don't want to get much below a GB free on C:. Using Shadow Copies means you should keep more 'empty' space around than you otherwise would, and I'd say 20% free-space is the marker for when more needs to be added or a clean-up needs to happen.

    For plain old NTFS data volumes, I get worried when it gets under 15%. However, if the drive is a 1TB drive, 15% is still a LOT of space to work with and allocate new files into (the converse being that it takes a lot longer to defrag).

  • I try to keep the used space under 80%. Above this number the filesystem generally has to work harder to place data on the disk, leading to fragmentation.

  • SSDs add a new layer to this: wear leveling and write amplification. For these reasons you want more free space than you absolutely need on traditional hard drives.

    Short stroking a traditional hard drive reduces latency for random reads/writes. "Short stroking" an SSD gives the drive controller more unused blocks for its garbage collection/wear leveling routines, so it won't speed up the drive, but it will increase longevity and prevent the speed loss that is seen when an SSD fills up.

    You still don't want to fill the drive but with SSDs the immediate effect isn't there and the reason why is different.

    From pplrppl
  • I'd always try to keep around 50% free on system volumes of any kind, and possibly on smaller data volumes, sizing 2-for-1 if possible, and I'd set a warning threshold at 75% or so.

    For data storage, however, it's mostly a matter of the data growth rate, which needs to be monitored and/or estimated when setting up the monitoring. If the data doesn't grow very fast on, for example, a 1TB volume, a few percent of headroom for the warning threshold would be fine and I'd be comfortable with 90-95% utilization. If the growth rate is higher, adjust it down so you get notified in time. Fragmentation can often be dealt with by scheduled defrags if the data isn't growing much and is just changing.

  • I try to leave 50% of it free for a couple of reasons:

    1. I'd heard that, despite the page file's relatively small size, leaving that much room can speed things up. I believe it was from the very helpful book "PC Hacks", published a few years ago.

    2. No matter how large the drive, having the goal of only filling it halfway makes you mindful of what's on there, and - for me - it means I'm better about archiving or moving stuff to a larger external.