Friday, January 28, 2011

Does the nginx “upstream” directive have a port setting?

Moved from: http://stackoverflow.com/questions/3748517/does-nginx-upstream-has-a-port-setting

I use upstream and proxy for load balancing.

The directive proxy_pass http://upstream_name uses the default port, which is 80.

However, if the upstream server does not listen on this port, then the request fails.

How do I specify an alternate port?

My configuration:

http {
    # ...
    upstream myups {
        server 192.168.1.100:6666;
        server 192.168.1.101:9999;
    }
    # ...
    server {
        listen 81;
        # ...
        location ~ /myapp {
            proxy_pass http://myups:81/;
        }
    }
}

nginx -t:

[warn]: upstream "myups" may not have port 81 in /opt/nginx/conf/nginx.conf:78.
  • You should set the port only in the "server" statements inside the "upstream" definition.

    (Which port does it listen on? 6666, 9999 or 81?)

  • I think you are misinterpreting the meaning of the line:

    proxy_pass http://myups;

    This line tells nginx to pass the request on to one of the servers listed in the 'upstream myups' block. It does not go back out to the internet and request the proxy_pass URL.

    In other words, when a request comes in to the nginx server on port 81 for the hostname you specified, it will pass the request on to either 192.168.1.100:6666 or 192.168.1.101:9999 (see the corrected configuration sketched below).

    Hope this clears it up a bit.

    From jammur
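    Putting that advice together, a corrected version of the configuration from the question might look roughly like this (a sketch; ports and addresses are the ones from the question):

        http {
            upstream myups {
                # ports belong here, on the individual backend servers
                server 192.168.1.100:6666;
                server 192.168.1.101:9999;
            }

            server {
                listen 81;

                location ~ /myapp {
                    # no port (and no URI) on the upstream name; nginx forwards
                    # to one of the servers declared in the upstream block
                    proxy_pass http://myups;
                }
            }
        }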

Network issues with DNS not being found

Hi there

This is exactly what our network looks like:
Single server with a network router

Everything is set up, but I cannot connect our Macs to this server under Login Options -> Join... Our server's name is Toolbox and I have tried Toolbox.local, Toolbox.private, and prepending the afp:// protocol to the name, but nothing works; our Macs just don't want to connect this way. Our router runs DHCP and hands out all the IP addresses. Would I have to add Toolbox.local to the DNS on the router and link it to the server via a static internal IP?

Our Macs keep giving the following error while trying to join the Network Account Server:

Unable to add server
Could not resolve the address (2200)

What am I doing wrong?

  • According to your error message it is clearly a DNS issue.

    Some hints: Is the correct DNS passed to the clients with DHCP? What do the client DNS/network settings look like? Are they using the correct DNS? What does it say if you try to resolve the address with "dig"? Does it work if you try joining the clients by the IP (for testing)? (A few example checks are sketched after the answers.)

    From Gomibushi
  • Check out the screenshots below. In the first, note the red highlighted area. Try that address. Also try the Kerberos realm. Can you ping the server from the clients using the IP address and Toolbox.local? Don't append afp:// or anything to the URL. You should set up a static IP for the server and configure the router to provide a PTR (reverse DNS name) for it. Send me a private message with your email address and I can help you out in more detail if you'd like.

    (screenshots not preserved)
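    A minimal sketch of the checks suggested above, run from one of the client Macs (the server name is from the question; substitute the server's real IP address):

        scutil --dns | grep nameserver               # which DNS servers did DHCP hand out?
        dscacheutil -q host -a name Toolbox.local    # does the name resolve via the system resolver?
        ping -c 3 Toolbox.local                      # reachability by name
        ping -c 3 192.168.1.10                       # reachability by IP (use the server's actual address)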

Does Identity Management for Unix modify the AD schema?

We have a forest whose schema master is a 2008 R2 DC (AD schema version is 47). I'd like to install Identity Management for Unix, but it's unclear to me whether or not this updates the AD schema. The server I plan to run IDMU and its NIS Server on is a 2003 R2 SP2 DC. The somewhat fuzzy impression I got from reading TechNet, etc. was that it still makes some schema updates even if your DC is 2003 R2+. Do I need to do this installation on my schema master first? We don't have the need to run this IDMU/NIS Server stuff on any of our other DCs.

How do you passthrough native SATA drives to a guest on ESXi?

I have ESXi 4.0 running on an Intel DX58S0 motherboard with an Intel Core i7 930 processor. VT-d is also enabled.

I have three drives in the system, drive 0 is used for ESXi. Drive 1 and 2 contain data from an older machine and show up under the "Storage Adapters" section in configuration.

I would like to allow a guest machine to access the data on these drives (as natively as possible). I have enabled passthrough of the motherboard's built-in SATA controller (Intel/Marvell 88SE6121). This controller shows up in my guest OS, but the guest shows no drives aside from the normal virtual drive. I have tried a Linux guest and Windows 7. I have also configured the host machine to try IDE/RAID/AHCI modes for the SATA controller.

Any ideas how I can configure one of my guests to get at the raw data on these drives?

  • I had a similar issue with some drives from a server that had failed; I found the answer on this page: http://www.vm-help.com/esx40i/SATA_RDMs.php

    It's far easier than controller pass-through or any of the other tricks I'd thought of, but you do need to be able to use the Service Console (google "esxi unsupported mode ssh").

    summary:

    Step 1) fdisk -l to find the device name

    Step 2) ls /dev/disks -l to find the VML identifier

    Step 3) vmkfstools -r VMLid aVMDKName.vmdk -a adaptertype

    Step 4) Add the aVMDKName.VMDK to a virtual machine.

    I wasn't able to boot off the disks as I had hoped to (P2V without copying 500GB across the network), but I was able to attach them to another virtual machine and get at the data.

    From Greg
  • The previous answer is correct, but the commands need some modification (a consolidated sketch appears after the answers):

    Step 1) fdisk -l to find the device name

    Step 2) ls /dev/disks -l to find the VML identifier

    Step 3) vmkfstools -r VMLid VMDKName-withFullPath.vmdk (i.e. /vmfs/volumes/disk2/somename.vmdk) -a adaptertype -z /vmfs/devices/disks/vml.0200000000600508b1001037383941424344450d004c4f47494341:8

    Step 4) Add the VMDKName-withFullPath.vmdk to a virtual machine.

  • Another solution would be to perform the following:

    Step 1) Make sure remote tech support (SSH) is enabled and running.
    Step 2) SSH to the host.
    Step 3) fdisk -l | grep -B4 "doesn't contain a valid partition table"

    Note: This will show you all the physical disks that don't have partitions yet, such as a newly-provisioned SAN LUN. It should look something like this:

    Disk /dev/disks/naa.60060e801004eb90052fab6900000000: 42.9 GB, 42949672960 bytes
    255 heads, 63 sectors/track, 5221 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/disks/naa.60060e801004eb90052fab6900000000 doesn't contain a valid partition table

    Disk /dev/disks/naa.60060e801004eb90052fab6900000001: 42.9 GB, 42949672960 bytes
    255 heads, 63 sectors/track, 5221 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/disks/naa.60060e801004eb90052fab6900000001 doesn't contain a valid partition table

    Disk /dev/disks/naa.60060e801004eb90052fab6900000002: 42.9 GB, 42949672960 bytes
    255 heads, 63 sectors/track, 5221 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/disks/naa.60060e801004eb90052fab6900000002 doesn't contain a valid partition table

    If this command doesn't show you any devices, my procedure probably isn't for you, as I, like the previous posters, assume that the reason your VC "Raw Device Mappings" radio button is greyed out is that the LUN doesn't have a partition.

    Step 4) Create a new partition: "fdisk /dev/disks/naa.60060e801004eb90052fab6900000000" (you'll have to use your own device name here).
    Step 5) If you're not too familiar with fdisk, you can do this:

    a) "p" to print existing partitions. If you don't see any, then it's probably safe to proceed.

    b) "n" to create a new partition.

    c) "p" for primary

    d) "1" for partition 1

    e) Enter to accept the default start sector

    f) Enter to accept the default end sector

    g) "w" to write

    h) "q" to quit

    Step 6) Now you should be able to assign the raw disk in VirtualCenter.

    From DrB
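    Pulling the commands from the first two answers together, a hedged sketch of creating the RDM (run in the ESXi Tech Support / SSH shell; the vml identifier, datastore name and VMDK path are placeholders):

        # note the vml.* identifier of the data disk
        ls -l /vmfs/devices/disks/

        # create a virtual-mode RDM descriptor on an existing datastore
        mkdir -p /vmfs/volumes/datastore1/rdms
        vmkfstools -r /vmfs/devices/disks/vml.0100000000XXXXXXXX \
            /vmfs/volumes/datastore1/rdms/datadisk.vmdk -a lsilogic

        # then add /vmfs/volumes/datastore1/rdms/datadisk.vmdk to the guest as an existing disk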

Force CPAN to download via HTTP

I'm about to lose my mind. How in the world do you tell CPAN to download via HTTP only? ...and NOT via a proxy.

  • Try these:

    Before running cpan: export -n http_proxy
    In the cpan shell: o conf http_proxy ''
    To save your modified cpan config: o conf commit
    

    That will disable any http proxy CPAN is configured to use.

    From Jason
  • If you don't want to use a cpan shell, you can also edit your cpan config file with a text editor; on Unix systems it's here:

    ~/.cpan/CPAN/MyConfig.pm

    Of course, the field to change in your particular question is 'http_proxy'.

    From toshiro
  • Try putting only HTTP URLs in your CPAN's Config.pm file, like:

    'urllist' => [q[http://cpan.cict.fr/], q[http://cpan.enstimac.fr/], q[http://mirrors4.kernel.org/cpan/]],
    

    I routinely do this and as far as I can see there's no FTP traffic to any CPAN mirror.
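    If you prefer to do this from the cpan shell rather than editing Config.pm by hand, a sketch (the mirror URL is one of those listed above; existing FTP mirrors can then be dropped with repeated "o conf urllist shift" or by editing the file):

        cpan> o conf urllist unshift http://mirrors4.kernel.org/cpan/
        cpan> o conf http_proxy ''
        cpan> o conf commit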

.htaccess redirect doesn't work correctly

Hi,

I'm using a .htaccess file to redirect all documents from the old site to the new one. The old site doesn't support PHP or mod_rewrite. I tried the following code:

Redirect 301 / http://www.new.com/archive/

I requested "http://www.new.com/archive/index.html", which resulted in:

http://www.new.com/archive/old.com/olddir/&&&/&&&/users/4/web/00/00/24/04/44/&&&/1/&&&/0/&&&/&&&/&&&/users/4/web/00/00/24/04/44/&&&/1/&&&/0/&&&/index.html

Is it possible to solve this?

  • Kevin -

    At first glance, it looks like something within your application is doing quite a few redirects. The simple Redirect statement in your .htaccess shouldn't be creating the very long URL that you pasted above.

    It may help you debug the issue if you use curl to test it:

    $ curl -I olddomain.com | grep ^Location
    Location: http://newdomain.com/archive/olddomain.com/
    

    From there, just curl the URL that is returned and see where you're redirected next. Take the next URL and curl it as well. Keep going until you are able to track down the source of those redirects.
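    To walk the whole redirect chain in one go instead of curl-ing each hop by hand, something like this should work (olddomain.com is a placeholder):

        $ curl -sIL http://olddomain.com/ | grep -Ei '^(HTTP|Location)'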

Using dot (.) as delimiter to specify group in chown

I've always done:

chown nimmylebby:admins file

I see that this also works:

chown nimmylebby.admins file

Might seem like a silly question but I'm genuinely curious on how the latter works. It isn't documented in my chown's manpage (GNU coreutils 8.4, 10/10). Is this perhaps a Bash interpretation? Or a deprecated format for the argument?

  • Hi,

    From the chown(8) manpage on Mac OS X 10.6.4:

    COMPATIBILITY
         Previous versions of the chown utility used the dot (``.'') character to distin-
         guish the group name.  This has been changed to be a colon (``:'') character, so
         that user and group names may contain the dot character.
    

    Good question, I learned something today ;)

    From nayden
  • From info coreutils 'chown invocation' for GNU coreutils:

    Some older scripts may still use '.' in place of the ':' separator. POSIX 1003.1-2001 (*note Standards conformance::) does not require support for that, but for backward compatibility GNU 'chown' supports '.' so long as no ambiguity results. New scripts should avoid the use of '.' because it is not portable, and because it has undesirable results if the entire OWNER'.'GROUP happens to identify a user whose name contains '.'.

Which open source LDAP directory server should I use?

This is an internal directory for a small (60 employees) company. I'm stuck between OpenDS and ApacheDS. Any recommendations? I'm pretty worried that Oracle will kill off OpenDS development.

  • OpenLDAP, which even has an O'Reilly book written about it.

    From adamo
  • If you need Windows-interoperability then maybe 389DS has some useful features.

    From ptman
  • Keep using OpenDS. It's the best and the easiest to use. Should you be worried about the fate of the project, be aware that ForgeRock has stepped in, offering support for OpenDS through the OpenDJ product and project. OpenDJ is a downstream project of OpenDS, developed fully in open source. Check the ForgeRock website for downloads, source code and more...

    From Ludo
  • Don't worry. Oracle may kill -9 OpenDS, but ApacheDS will still be around ;)

    And it's really open source, so what else ?

Backing up permissions on Linux

Due to an intermittent internet connection and some fat-fingered typing, I came very close to sending the command chown -R me.group / to my server, which I think would be fairly disruptive.

Is there a way to backup just the permissions on all the files on the system?

  • You can run ls -lR / > permissions_backup to create a file containing all permissions, but this would be quite hard to restore. You could of course quickly write a script to do it.

    From fahadsadah
  • There's no specific command to back up file permissions. As the previous poster mentions, you could craft a recursive find or ls -lR to a file, and then compare that file with the permissions on any particular file you're interested in.

    Alternatively, there are packages for intrusion detection which monitor file sizes and permissions, but this is probably overkill for your scenario.

    Tid.

    From
  • To back up all permissions on the system:

    getfacl -R / > acl_backup
    

    To restore:

    setfacl --restore=acl_backup
    

    Of course, check out the manpages, but it's a pretty straightforward command that many people are unaware of:

    man getfacl
    man setfacl
    
    dunxd : Nice - backup of acl is 18Mb on my system. Hope I never need to restore it. Thanks!
    Nimmy Lebby : Glad it worked! :-)
  • If you use a Linux distribution that uses packages (basically anything except Slackware), the permissions are probably stored in the RPM/Deb/whatever database. For RPMs, check out the --setperms and --setugids options.

    From DAM
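    A sketch of the RPM route, which covers packaged files only (try it on a test system first, as it touches a lot of files):

        # reset owner/group, then permissions, for every installed package's files
        rpm -qa | xargs rpm --setugids
        rpm -qa | xargs rpm --setperms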

Locale setting on a Red Hat box

Hi all,

Recently our organization got a couple of server boxes which are, I guess, located in a data centre in the UK. The problem is that for some reason the default locale in Java on that server is en_US instead of the expected en_GB (I confirmed this by running code on that server which simply outputs Locale.getDefault()). I am pretty sure this has got something to do with the way in which the boxes were set up.

My question is: what would be the approach to fix this issue now that the OS has been installed? Is there any way I can set the locale to en_GB instead of the current en_US for a given SSH session?

TIA,
sasuke
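A quick way to test this per session, as a sketch (the exact locale name must exist on the box; check with "locale -a"):

    # for the current SSH session only
    export LANG=en_GB.UTF-8

    # or force the JVM's default locale regardless of the OS environment
    java -Duser.language=en -Duser.country=GB -jar yourapp.jar   # yourapp.jar is a placeholder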

How do you convert PHP5 CGI to an Apache module?

Hi, I installed PHP5 on my Debian Lenny system as CGI. Now I have found that flush() in PHP only works when PHP is installed as a module. Does anyone know how to (re)install PHP5 as an Apache2 module?

Thanks :)

  • Just copy and paste the following.

    sudo -i
    aptitude -y remove php5-cgi
    aptitude -y install libapache2-mod-php5
    /etc/init.d/apache2 restart
    

    and it should be done. You will be asked for a password.

    From fahadsadah
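    To confirm the module is actually loaded after the restart, something along these lines should work:

        apache2ctl -M 2>/dev/null | grep php5    # should list php5_module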

Redmine GUI rendering issue

Hi,

We have been using this Redmine instance in the intranet for some time now, but from one day to the next nearly all forms in Redmine look like this. It is the same in Firefox, Chrome and Safari. I also opened a different Redmine instance from another server in the same Chrome browser, and it looks fine.

(screenshot not preserved)

The login form, the search boxes and the filter boxes are not affected. The wiki also works fine.

I cannot remember having changed a setting that could have done this. I also tried changing the skin from standard to classic or alternate, which did not help.

Version info says Redmine 1.0.2.stable (PostgreSQL). The server is Ubuntu 10.04 64-bit, the client is win3k-32.

The last thing I did was adding a new project.

Update:

The site is reverse proxied through an HTTPS Apache2 in our intranet. I just found out that, served directly from the original machine (with Mongrel on HTTP port 9001), everything is fine, so I guess Apache filters something. Any ideas?

Maybe a resource like a CSS link is not properly rewritten?

This is the vhost file from the proxy:

<VirtualHost 10.1.1.186:80>

    ServerName redmine.cgnch.de

    ErrorLog /var/www/redmine_http_error_log
    CustomLog /var/www/redmine_http_access_log combined

    #Re-write any HTTP request to HTTPS
    RewriteEngine On
    RewriteCond %{SERVER_PORT} !^443
    RewriteRule ^(.*)$ https://%{SERVER_NAME}$1 [L,R=permanent]
    #RewriteRule ^(.*)$ https://%{SERVER_NAME}$1 [L]

</VirtualHost>

<VirtualHost 10.1.1.186:443>

    ServerName redmine.example.com

    ErrorLog /var/www/redmine_ssl_error_log
    CustomLog /var/www/redmine_ssl_access_log combined

    #Configure Reverse Proxy
    ProxyRequests Off
    ProxyPreserveHost On

    #Rewrite Engine for URLs in HTML, JS and css:
    SetOutputFilter proxy-html
    # ProxyHTMLEnable On
    # On: rewrite also css and javascript - Off: only in HTML
    ProxyHTMLExtended Off

    <Location />
        ProxyPass http://10.1.1.185:9001/
        ProxyPassReverse http://10.1.1.185:9001/


        Order allow,deny
        Allow from all
    </Location>

    ProxyHTMLURLMap http://10.1.1.185:9001 https://redmine.example.com

    SSLEngine On
    SSLProxyEngine On
    SSLProxyProtocol all -SSLv2

    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

</VirtualHost>
  • Check your browser's error console. If it's something like an inaccessible stylesheet, an appropriate error message will be shown.

    mit : You mean the rails / mongrel logs? Where can I find them?
    fahadsadah : I actually meant the browser logs - what web browser are you using?
    fahadsadah : Nevermind, I see you've sorted it.
    From fahadsadah
  • I removed the following lines from the vhost file:

    SetOutputFilter proxy-html
    ProxyHTMLExtended Off
    ProxyHTMLURLMap http://10.1.1.185:9001 https://redmine.example.com
    

    It is not necessary to remap the hostnames, because Redmine only uses relative addressing.

    Everything works fine now.

    From mit

Symbolic links not working in MySQL.

Hello,

I'm having an issue; I have searched a lot but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24) I need to share one table between two databases, both of which live under the same path (/var/lib/mysql/db1 and db2). I created symbolic links in db2 pointing to the table in db1.

When I query the same table from db2 I get this : 'ERROR 1030 (HY000): Got error 140 from storage engine'

This is how it looks :

test-lan:/var/lib/mysql/test3# ls -alh
drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 .
drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 ..
lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm
lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD
lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI
-rw-rw---- 1 mysql mysql 65 2010-08-30 13:24 db.opt 

I really need those symlinks. Is there a way to make them work like before? (The old MySQL server is fine.)

Thanks,

  • I think Error 140 is a permissions issue - what are the permissions inside /var/lib/mysql/test? They also need to be mysql:mysql.

    You've also got a problem here that the database is eventually going to have locking issues - what database engine are you using for these tables?

    Eno : All belong to mysql:mysql, I'm using the MyISAM engine and I'm sure permissions are fine :/
  • Is it possible to split this shared table into a third database, and give both programs access to it? This will probably be your least-maintenance option in the long run.

  • Why do you need a symlink? Why not just grant whichever user needs it the privileges to read that table from the second DB and explicitly query that db/table combo, like so:

    $ mysql db1
    mysql> SELECT * FROM db2.table;
    

    Symlinking like that is definitely an unsupported way to share a table and will definitely lead to issues down the line. MySQL expects a DB to own all of its tables, so symlinking may cause corrupted data.

    From vmfarms
  • Don't use a symbolic link; use a hard link. It looks just like a regular file to the OS and therefore to MySQL as well. It's basically a file with two entries in the filesystem instead of the usual one.

    ln old_file new_file
    
    From Kevin M
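    Applied to the layout shown in the question, the hard-link suggestion would look roughly like this (paths are the ones from the question; both directories must be on the same filesystem, mysqld should be stopped first, and the same caveats about unsupported table sharing apply):

        /etc/init.d/mysql stop
        cd /var/lib/mysql/test3
        rm blbl.frm blbl.MYD blbl.MYI                 # drop the symlinks...
        ln /var/lib/mysql/test/blbl.frm blbl.frm      # ...and replace them with hard links
        ln /var/lib/mysql/test/blbl.MYD blbl.MYD
        ln /var/lib/mysql/test/blbl.MYI blbl.MYI
        /etc/init.d/mysql start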

Problem with virtual host on Zend Server (CE)

Hello,

I am trying to set up a virtual host on Zend Server CE but get the following error:

Forbidden

You don't have permission to access / on this server.

This is what my virtual host configuration looks like:

<VirtualHost *:80>
 ServerName stage5local
 DocumentRoot K:/stage5/public_html
 <Directory K:/stage5/public_html>
  DirectoryIndex index.php
  AllowOverride All
  Order allow,deny
  Allow from all
 </Directory>
</VirtualHost>

I uncommented this line in my httpd.conf:

Include conf/extra/httpd-vhosts.conf

I have added stage5local to my hosts file. After I did all this I restarted my Apache server.

I am running on Windows XP.

Any ideas?

Could it be because K:/stage5 is not within my webroot?

  • Can the webserver read K:\stage5\public_html?

    Assuming K is NTFS, you can check by:

    • Disable simple file sharing (Tools -> Folder Options -> View in Explorer)
    • Right click on the folder K:\stage5\public_html
    • Go to the Security tab
    • Give the username ZEND SERVER access to read
    From fahadsadah

Windows 2003 Server freezes

Hello,

We currently have a web server running Windows Server 2003 with a combination of ASP and ASP.NET sites.

This server has stopped responding 3 times over the last 3 months. Despite being configured to generate a kernel memory dump and write an entry in the event logs when a BSOD occurs, nothing is being logged. When viewing the server locally it is unresponsive with a blank screen, and we have to power cycle the server to bring it back up.

After the last time, we replaced all of the hardware with the exception of the hard disks. Does anyone have any idea what could be causing this or what we should look at?

Thanks

Neil

Edit

One thing I've found on Google is a blog post saying that a fragmented hard disk could be the cause. As the disk is fragmented, we will defrag it to see if that helps. Has anyone experienced this before?

  • Have you looked through the event viewer for possible hints?

    Neil : Yes, the event log has no warning or error entries for the time of the crash; it just stops having entries and then starts again with the unexpected shutdown entry.
    : What about entries prior to the event/situation. When it does occur, is/are the application(s) still responding?
    From
  • So is a BSOD actually occurring? I'd recommend un-ticking the "automatically restart" option; this gives you a chance to actually see the BSOD. If it is BSOD'ing, you'll be able to see the problem (e.g.: IRQL_NOT_LESS_OR_EQUAL), hopefully the culprit (e.g.: XYZ.SYS) and the STOP code.

    Neil : When the server is accessed locally it is non responsive with a blank screen. The data centre staff have to physically turn it off to bring it back on line.
    TomTom : Either outdated drivers, or broken hardware. That is it - nothing you can do. Had the same with bad network drivers on 2008 when it came out. An update fixed it.
  • If the server has a blank screen and is not responsive then it hasn't crashed, so stop looking for evidence in the event log because there won't be any. There also won't be a memory dump because, again, it hasn't crashed; it's become unresponsive. You say that you replaced all of the hardware? I find that a little hard to believe; are you saying that you replaced the motherboard, CPU, memory, hard drives, etc.?

    Neil : It was a chassis swap so only the hard disks are the same.
    From joeqwerty

Best package to manage backup / recovery of a web application with database, to a remote server?

We need to manage automated backup and restoration of multiple websites, most built on PHP / MySQL, hosted on different servers across the globe. All the backups will be made to a single remote server. All servers run on Linux.

I was thinking of rdiff-backup with mysqldumps and replication, but we need to get this up and running quickly, so we require an out-of-the-box solution.

Any suggestions?

  • Here is a good one:
    http://www.backup-manager.org/about/

    EDIT (I see no one read the about page):

    Easy and automatic operation

    * 1 configuration file, 5 minutes setup.
    * Manually invoke backup process or run daily unattended via CRON.
    

    Comprehensive Backup

    * Backup files, MySQL databases and Subversion repositories.
    * Specify multiple targets to backup at once (/etc, /home, etc…).
    * Ability to exclude files from backup.
    * Automatically purge old backups.
    

    Backup Methods

    * Full backup only or Full + Incremental backup.
    * Backup to an attached disk, LAN or Internet.
    * Burns backup to CD/DVD with MD5 checksum verification.
    * Archives in lots of open formats: tar, gzip, bzip2, lzma, dar, zip.
    * Slice archives to 2 GB if using dar archives format.
    

    Secure

    * Backup over SSH.
    * Encrypts archives.
    * Offsite remote upload of archives via FTP, SSH, RSYNC or Amazon S3.
    

    Advanced

    * Can run with different configuration files concurrently.
    * Easy external hooks.
    

    Restoration

    * Simply uncompress the open-format backup archives with any command-line or GUI tool.
    
    From Paul
  • For the mysqldumps, use automysqldump: it's very configurable. For the code tree, use a cron script doing something like the following (a fuller cron-ready sketch appears after the answers):

    tar cvf - my_www_dir | ssh user@remotehost "cat > mywwwdump.tar"

    I'm using this method for a few servers I administer and it works well.

    Tid

    From
  • Hi,

    What are your concerns about rdiff-backup? I've been using it successfully to back up websites and MySQL databases (database dumps) for the past year and a half with no real issues. The incremental backups work beautifully and the restore process has saved my neck twice.

    Good luck!

    From nayden
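    Expanding the tar-over-ssh idea above into something cron can run nightly, as a sketch (user, host and paths are placeholders):

        #!/bin/sh
        # dump all databases and ship the web tree to the backup host
        STAMP=$(date +%F)
        mysqldump --all-databases | gzip | \
            ssh backup@backuphost "cat > /backups/$(hostname)-db-$STAMP.sql.gz"
        tar czf - /var/www | \
            ssh backup@backuphost "cat > /backups/$(hostname)-www-$STAMP.tar.gz"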

Use prefork or worker in Apache configuration?

I have an Apache server (Apache/2.2.13) and I want to choose an MPM, but I do not know whether to use prefork or worker and what configuration I need.

Specification:

RAM: 4 GB

CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz

CPU cores: 8 (from grep -c processor /proc/cpuinfo)

Max size per Apache process: 23 MB

  • If you use only thread-safe modules (which rules out e.g. mod_php with some non-thread-safe extensions), you should definitely go for the worker MPM, since it is better suited to modern systems (multiple processes, multiple threads).

    From joschi
  • The answer is "it depends" -- things which it could depend upon:

    1. Are you planning on using PHP with your Apache environment? If so, then the worker MPM might not be for you (see this link), as certain PHP modules aren't thread-safe.

    2. Are you planning on performing operations which might require locking of files? If so then using the worker MPM might not be for you. Threads are not full POSIX processes and therefore don't necessarily obey file locking operations.

      An example of this is Subversion version < 1.5 - these versions made use of the apr and apr-util libraries using Berkeley DB for performing commits. Berkeley DB relies upon locking; therefore this could result in commits which corrupt your repositories.

    The key thing to do is to figure out what you're trying to do with your Apache service - how are you trying to serve things, how are the back-end processes operating to construct the data to be served.

    • Are you just interfacing with a Tomcat service via mod_proxy or AJP? It's usually okay to go with the worker.
    • Are you doing things with Apache modules (SVN, PHP are two examples)? In that case prefork might be safer.
    • Are you using NFS (doesn't always support filesystem locking properly)? Again prefork probably the safer option.

    YMMV - but ultimately you need to understand the architecture underlying things to make a true judgement call on this. If you don't understand the architecture then go with prefork as it's the safer route.

    : web server working with perl website
    From DaveG
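    If you end up on prefork (check which MPM your binary was built with via "httpd -V" or "apachectl -V"), a rough sizing sketch based on the numbers in the question: about 23 MB per process and 4 GB of RAM, leaving roughly 1 GB of headroom for the OS and anything else on the box, gives 3000 MB / 23 MB, or about 130 workers.

        # prefork sizing sketch -- tune the limits to what the box actually runs
        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         130
            MaxClients          130
            MaxRequestsPerChild 4000
        </IfModule>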

File Server - One large LUN or multiple smaller LUNs?

We're about to replace a bunch of kit (servers, SAN etc.) and migrate our servers across from their current platform.

Our file server has around 8 TB of data on it right now, spread across 5 LUNs (all on the same SAN, mind you).

To my mind the pros of multiple LUNs mainly come down to "what if" scenarios, such as file restores and the impact of file system corruption; the con is allocation of space, i.e. I might have 500 GB free on one LUN but no space free on the LUN where I need it.

How would you do it and why?

Thanks.

  • I don't see a real reason to split it over multiple LUNs.

    From TomTom
    1. Check whether there are any limitations on max LUN size on your SAN or at any other point in your stack. For example, if you're running this as a VM within ESX then you have a limit of 2Tb per LUN without hacking around it.

    2. Inspect the payloads. If you're serving several distinct kinds of content from the same server (e.g. Install ISOs, User Shares, Email Archives) you may want some of them to be on faster disks while some sit on cheaper disks. Could this tiering be of value in the future even if it's not required today?

    3. Evaluate the likely future growth of your storage requirements. Is it likely that you'll ever have to increase the available space within one of your storage areas? Can you do this while the LUN stays online? Can you do this while the server stays online? You may find that by splitting your data logically across several LUNs, you can expand one of those storage areas while serving from just 1 server, and without down-time.

    4. Weigh up the probability of ever needing to split one of your storage areas off to a separate server. Performing this operation is simpler if that storage is on a separate LUN, as you can de-provision it to the current server and provision it to the new host.

    5. Are you replicating your storage to another device? Is this on a per-LUN basis? Do you need to replicate all of the data, or only part of it? You may want to split into separate LUNs to cut your replication traffic down to just the important data.

    Hutch : SAN shouldn't care (P4000), LUNs may be RDM in which case yes 2tb limit applies, or they may be presented within the guest from the MS iSCSI initiator. I think point 2 is very relevant as with a single large LUN it does seem I give myself fewer options.

Autorotating incremental/differential backups with cron

Hello,

I need a backup script (or tool) for my Ubuntu server. Simply packing a folder into a tar.gz from cron.d is quite easy to do.

But the problem is, with every update there are several hundred MB of data. So I thought of having an incremental backup, with a daily, weekly and monthly rotation.

More concrete requirements:

1. On Sunday, do a full backup.
2. On Monday, Tuesday, Wednesday, Thursday, Friday and Saturday, do incremental/differential backups only.
3. On the next Sunday, do either a full backup or just back up the difference between this week and the last (not sure yet what's better here; the data doesn't change that often apart from the mail folder, and the latter would significantly save disk space but mean more work rolling the data back to a certain point). Rotate the last 4 weeks.
4. On every 1st of the month, do a full backup. Keep rotations of the last 3 months.

Either one (a script for cron.d or an application of its own) is welcome. It would be preferred if it can be installed via the OS's package manager without having to compile too much yourself.

The system in question is an Ubuntu 8.04 LTS (newer not available due to virtualization and the virtualisation software being bound to that kernel)

  • You may want to try rsnapshot: http://rsnapshot.org/ It makes use of rsync and hardlinks to achieve system snapshots, which is basically what you need. It also comes in the repositories of Ubuntu.

    From revenant
  • Sounds like rsnapshot will do much of what you want with minimal configuration. It essentially does a full backup every day, but because it stores backups with hard links, and uses rsync to efficiently transfer files, it's pretty efficient both space-wise and network-wise if your files to be backed up aren't changing much.

Taggable alternatives to memcache

I'm looking for a caching solution quite similar to memcache, but the one thing I sorely miss is invalidating content based on a tag. Many preprocessed results depend on multiple sources of data, and a source of data contributes to multiple results. Altering a source of data should cascade into invalidating all of those caches.

Of course I could store a tag as a key, with a list of the other keys it generated / that depend on it, but as any speed gain is of the essence, I'd rather not make multiple trips. What are my alternatives for a non-permanent, expiring in-memory data store that has that capability?

  • The solution is to start each key with a variable version number which you can use to invalidate all the associated data.

    For example, use xxx_datakey to store all data which needs to be invalidated together, and yyy_datakey to store another group of data which needs to be invalidated together.

    xxx is a number which you store in memcache and need to read only once per transaction, and only write again when it changes.

    If you want to invalidate the whole group whose keys start with xxx, just increment the xxx value in memcache.

    From Niro

Surprising corruption and never-ending fsck after resizing a filesystem.

The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. It has 16 GB of memory and 8x1 TB drives running behind a 3Ware RAID card.

The storage is managed via LVM, and is exclusively comprised of ext3 filesystems.

Short version:

  • Running a KVM guest which had 1.7Tb storage allocated to it.
  • The guest was reaching a full-disk.
  • So we decided to resize the disk that it was running upon

We're pretty familiar with LVM, and KVM, so we figured this would be a painless operation:

  • Stop the KVM guest.
  • Extend the size of the LVM partition: "lvextend -L+500Gb ..."
  • Check the filesystem : "e2fsck -f /dev/mapper/..."
  • Resize the filesystem: "resize2fs /dev/mapper/"
  • Start the guest.

The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of an error.

Being paranoid we shut the guest down and ran the filesystem check again, given the new size of the filesystem we expected this to take a while, however it has now been running for > 24 hours and there is no indication of how long it will take.

Using strace I can see the fsck is "doing stuff", similarly running "vmstat 1" I can see that there are a lot of block input/output operations occurring.

So now my question is threefold:

  • Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues.

  • What is the most likely cause? (3Ware card shows the RAID arrays of the backing stores as being A-OK, the host system hasn't rebooted and nothing in dmesg looks important/unusual)

  • Ignoring btrfs + ext3 (not mature enough to trust) should we make our larger partitions in a different filesystem in the future to avoid either this corruption (whatever the cause) or reduce the fsck time? xfs seems like the obvious candidate?

  • It seems that volumes larger than 1Tb have problems with virtio:

    https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/574665

    http://kerneltrap.org/mailarchive/linux-kvm/2010/4/23/6261185/thread

    http://sourceforge.net/tracker/index.php?func=detail&aid=2933400&group_id=180599&atid=893831

    From Jim
  • In this case it's probably virtio and the 1TB problem.

    But I also came across similar problems while alternately accessing a device outside a virtual machine (including shutting that machine down) and inside the virtual machine. If you access the block device inside the virtual machine with direct access (e.g. in the KVM config), meaning without cache/buffers, and outside with buffers, you can get the following problem:

    • Resize the device outside the VM; the cache/buffers get filled on the KVM host.

    • Start the VM, notice (other!) problems and shut it down.

    • fsck the device.

    If everything went really badly, you read data from the cache that had been changed by the previously running virtual machine, which accessed the device without buffers/cache!

    I also do a lot of ext3 resizes (since 2.6.18) and I do them ONLINE all the time! AFAIK this uses kernel functions to resize, while offline resize uses userland code.

  • Also check your KVM cache settings; KVM can do no caching, read cache, writeback, and writethrough caching, which can take you rather by surprise.

    From Rodger

Red Hat 5 and L2TP client

Hi, I am trying to set up a VPN connection with my Red Hat 5 machine. The operator of the VPN server I am using suggested that L2TP will work much better than my current PPTP setup (which is not really usable). Any suggestions on how to set this up?

  • short question, no short answer possible ...

    My guess is that whoever suggested L2TP was referring to IPsec+L2TP, for which there are a few tutorials out on the internet. Plain L2TP is not encrypted and hence not very 'private'. Try your luck with http://www.ipsec-howto.org/

    HTH,

    JJK

    From janjust

Is there some advanced traffic shaping frontend for linux?

If you have ever worked with Mikrotik routers, you are probably used to 'simple queuing', a very simply managed list of IP->speed rules. I guess other router OSes have something similar; for those who have never seen it, here is a screenshot: http://wiki.mikrotik.com/images/3/3d/Queue.jpg

Now, this concept is pretty easy and straightforward, and my boss (who started a mid-sized local ISP) has been using it to shape customer traffic ever since. We have now come to a point where Mikrotik simple queues no longer scale, mostly for 3 reasons:

  • any machine we tried isn't capable of handling more than ~2500 rules, especially at speeds above 300 Mbit.
  • the main problem - as the network is mostly wireless, we would like a tool that can automatically measure if there's some latency or packetloss happening somewhere, and prioritize/limit traffic so the wireless connection isn't stressed anymore.
  • we would like to somehow effectively distribute spare bandwidth (especially during the night) to users who will appreciate it, while holding the aggregate traffic to guaranteed speeds at peak times.

I've gone through the obvious routing software (Vyatta, BIRD, ...), but found nothing interesting enough. I'm asking whether there's some free software with such capabilities; and if not, whether anyone here has experience with those (expensive) Cisco/Juniper/Allot/similar QoS black boxes and can say whether they would actually help me.

Thanks

e.

  • Try MasterShaper:

    http://www.mastershaper.org/index.php/MasterShaper

    http://www.mastershaper.org/shaper2/index.php Username: demo Password: demo

    exa : looks working good, but doesn't seem really ready for "enterprise" :(
    From User4283
  • Don't know if you have the functionality in Linux, but FreeBSD has dummynet, which is very flexible. An easy way of getting the functionality of this is to set up a m0n0wall server or device - basically a router/firewall with lots of very stable functionality. You can put your server behind this, and use m0n0wall to do all kinds of traffic shaping.

    Installing m0n0wall on a low end server (or even an old desktop) would get you a lot of what the high end network devices you list give. To support more rules etc, you would need a better hardware of course. The m0n0 docs talk about maximising throughput. I've not seen any tests confirming it, but the principles will probably be helpful to you.

    From dunxd
  • OpenBSD's packet filter PF has ALTQ, which is known to be a very robust and good solution for QoS.

    The pfSense firewall is a good starting point to test it out (make sure you use the stable 1.x version).

    From pauska