Sunday, January 16, 2011

Incremental backup of site

I want to make a periodic archive of my site. I have an lftp script that downloads the site content via FTP into a directory named for today's date (date +%Y%m%d). What is the best way to make an incremental, compressed backup without storing many duplicates?

  • did you try rsync?

    Vlad : I can't use rsync - I only have access via FTP
    Vlad : and I want to make a compressed incremental backup
    From quamis
  • Duplicity may fit your needs.

    It is incremental: after a full backup is performed, all future backups are simply difference files. Note that this is the opposite of backup solutions that store a mirror of the latest state plus difference files to recreate previous backup points.

    It is compressed: Duplicity produces encrypted backups (perhaps useful for you, since you're stuck with FTP?), and the encrypted archives are compressed (as I understand it). You can also bypass the encryption and simply get a gzipped backup (--no-encryption).

    It works over FTP: Duplicity can use many remote protocols (including FTP); the problem in your case is that duplicity would need to be run from your server. I do not believe you can use duplicity to back up a remote source to a local destination (just a local source to a remote destination).

    In your case, if you're not looking for the compression benefit in transferring the data, only in storing it, then you could keep your FTP script, and after the current 'image' is transferred have duplicity back up that temporary image to your existing backup, then delete the image (see the sketch after these answers). This way you would have a series of backup files that could be used to restore your site at any backup point, and those files would be gzipped archives of only the changes since the last backup point.

    Just a note, every so often it would be wise to do a 'full' backup, since duplicity relies on each incremental backup going forward from a full backup.

    Another solution (assuming again that temporarily storing an FTP'd copy locally is acceptable) would be to simply use rdiff-backup. This would give you a mirror of your site (as of the last backup), and past backups would be stored as the differences going backwards. I'm not sure if those are compressed, but even if they aren't, you would only be storing the changes to files for each backup point.

    Vlad : so I can't use it from my local Linux machine?
    Tim Lytle : Not sure what you mean there. I don't believe you can run duplicity locally and backup a remote path (it would make the encryption somewhat meaningless). But you can run duplicity locally and backup to a local path (essentially making gzipped archives of the changes since the last full backup).
    From Tim Lytle
  • backup2l is a very simple tool that builds an incremental zip file, which you can then download via FTP.
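
Putting Tim Lytle's suggestion into a concrete shape, a minimal sketch of the workflow might look like this. The host name, credentials and paths below are placeholders, not values from the question:

    #!/bin/sh
    # Hypothetical wrapper: mirror the site over FTP, then let duplicity
    # store only the gzipped differences since the last full backup.
    TMP=/tmp/site-image
    DEST=file:///backups/site

    # 1. Pull the current site contents over FTP (the existing lftp step).
    lftp -u user,password -e "mirror -e / $TMP; quit" ftp.example.com

    # 2. Back up the temporary image; --no-encryption skips GPG but keeps
    #    the gzip compression.
    duplicity --no-encryption "$TMP" "$DEST"

    # 3. Remove the temporary image.
    rm -rf "$TMP"

    # Every so often run "duplicity full --no-encryption $TMP $DEST" instead,
    # so the incremental chain restarts from a fresh full backup.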

MySQL databases and Windows Server Backup (Windows 2008)

Hello,

I have a strange problem. I've turned on incremental backup in Windows Server Backup, so if I'm not mistaken every backup should let me retrieve files from a certain point in time. Can anyone tell me why, when I recover the MySQL database data folder, I get the latest backup instead of the backup from the point in time I selected?

Thanks

  • I would schedule the MySQL backups separately from Windows Server Backup, for a number of reasons, not least of all consistency checks and avoiding backing up corrupt data files if there are I/O errors (a minimal sketch follows below).

    That way, your Windows incremental backups can take the backup files (as opposed to the data files) and you will be covered.
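
For example, a minimal sketch of such a scheduled dump (the credentials, options and target path are placeholders, assuming mysqldump is on the PATH and the command is run from a scheduled task shortly before the backup window):

    mysqldump --single-transaction --all-databases -u backupuser -pPASSWORD > D:\Backups\mysql\all-databases.sql

Windows Server Backup then only ever has to capture the dump file, which gives a consistent dump for transactional tables, rather than the live data files.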

bond0 and xen = crash

Bonding with xen

1 - Stop all guests. Reboot dom0 after running "chkconfig xend off" and "chkconfig xendomains off".

2 - Configure bond0 by enslaving eth0 and eth1 to it. I added the below two entries to /etc/modprobe.conf.

alias bond0 bonding
options bond0 mode=6 miimon=100

Content of /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Content of /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Content of /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=
NETMASK=
ONBOOT=yes
BOOTPROTO=static
USERCTL=no

Did "modprobe bond0" and "service network restart" after that.

3 - Edit /etc/xen/xend-config.sxp

Change

(network-script network-bridge)

To

(network-script 'network-bridge netdev=bond0')

4 - Start xend. "service xend start".

5 - chkconfig xend on.

6 - modprobe bond0

7 - more /proc/net/bonding/bond0

8 - Create guest images as usual and bridge it to xenbr0.

That is the configuration I used with my Xen kernel on RHEL 5.3. After I reboot the host server, bond0 gets replaced by pbond0 and the host gets disconnected from the network; I can only ping the VMs running on it. Does anyone have any idea why Xen's bond0 is acting like that, or what the solution is to get from pbond0 back to bond0?

  • I think I had the same problem but it is hard to know without seeing the stack trace from the kernel oops. I think mine was related to a driver issue. Bonding with xen crashed the machine. I found a bug report from RedHat on this. Upgrading to kernel-2.6.18-160.el5.x86_64.rpm or newer fixed it. You can get -160 from:

    http://people.redhat.com/dzickus/el5/160.el5/x86%5F64/

    Try this out and see if it fixes your problem.

    Rajat : thanks, I'm using kernel-2.6.18-164.el5.x86_64.rpm and that fixed it.
    From Tracy Reed

Connecting multiple switches to a router

Hi there,

We have an office network that consists of the following:

1x Vigor 2950 5-port (WAN Load Balancer)
2x Netgear 24-port Managed Switch FSM726
1x PowerConnect 2724 24-port

  • The Vigor has our two ADSL lines hanging off it.
  • Our patch panel connects into the 2 Netgear (for all desktops, laptops, etc...)
  • All servers plug in to the PowerConnect

Currently the configuration is:

  • Netgear 1 connected to Netgear 2 using GB ports
  • PowerConnect connected to Netgear 2 using GB ports
  • Netgear 2 connected to Vigor 2950 using GB ports

Basically, the question I have is: is this the correct way we should be doing it? We had an instance last week where a user was copying 10GB of large files from a server on the PowerConnect to his machine on Netgear 1, and it basically killed the network for everyone else except him.

Should I in fact be connecting each Netgear and the PowerConnect into the Vigor instead?

I'm not sure what the rules are for connecting multiple switches together and I don't seem to be able to find anything good on Google.

Thanks.

Niklas

  • If you've got more than one connection between the same two switches, either:

    a) Make sure that you're using spanning tree, or
    b) Don't do it.

    Remember, the LAN ports on the Vigor box count as a switch as well. I'm not sure what sort of bandwidth management options the NetGear switches give you, but you may also want to look into that. As far as the actual setup goes, it looks pretty solid.

    sybreon : +1 for STP. Recently had an organisation wide black-out when someone plugged a cable into the wrong port.
    RainyRat : I've done something similar myself. Once the lack of STP bites you, you tend to stay bit...
    kmarsh : STP doesn't work that great on my PowerConnects, or between different brands. See my answer about daisy chaining.
    From RainyRat
  • The proper way of doing it is:

    1. Don't use Netgear and Dell PowerConnect switches for mission-critical network operations (been there, done that, moved on). Their advanced features just don't work that well, especially when using more than one feature at once.

    2. Don't use a conglomeration of cheap switches for your network backbone. Invest in at least one large managed Layer 3 switch, with real phone tech support and 4 hour replacement. They exist and cost more for a very good reason.

    3. Don't use cheap switches for port aggregation to combine 10/100 and Gigabit Ethernet clients. They will drag down the performance of all Gigabit connections the moment the first 10 or 100Mbit client is connected.

    4. Now that you have real equipment that doesn't choke, use EtherChannel (siamesed ports) or Stacking to connect backbone switches together. This will allow more than one user full Gigabit throughput internally.

    5. As RainyRat said, implement Spanning tree on EVERYTHING, even if it slows down recognition of new devices (30 sec instead of 3 sec).

    As you have already discovered, cheap SOHO switches simply don't have the internal backbone to handle serious network traffic. Daisy-chaining them multiplies their limitations.

    EDIT: If you can't afford that, you can try: EtherChannel the two NetGear switches together with a 2xGigabit link, and turn STP on. You can use 10/100 ports to limit your power user's throughput.

    The PowerConnect is a managed switch, but I have found difficulties in utilizing more than one managed feature at a time. You can try STP on the PowerConnect and EtherChannel to the Netgears, but I'm not optimistic about throughput. When I tried fixed port speed plus VLANs on my PowerConnects, they bricked and had to be hard reset.

    kmarsh : Upper management couldn't understand or believe the price of good managed switches + tech support. Explaining 4 hour replacement helped, but they still didn't get it. Explaining "I can call tech support, put in a research request how to connect X # of switches with Y # of 4-port EtherChannels, STP+ loop-back protection and lock out physical cross-connects of VLANs, and get the correct answer back in a couple of days", that got through.
    Neil Middleton : I don't think this answer is actually helpful - whilst we would all like to be able to replace kit, most people can't do that just to solve a simple problem
    kmarsh : I never said "top of the range kit". HP ProCurve costs 1/10 of Cisco equipment and gives you what you need. See edit above for other ideas.
    kmarsh : Yes, assuming the Vigor+ADSL can handle this. Or, if you wish to bandwidth-limit the PowerConnect users, you can put the PowerConnect downstream of a NetGear. This forces the switches to limit throughput to 10/100 before it puts pressure on the Vigor. In other words, if you have limited throughput, giving power users the best and fastest pathways can be counterproductive. By forcing their traffic down to 10/100 speeds you can keep them from dominating your limited resources.
    From kmarsh

crontab: login name too long

When I try to edit the crontab for a user with a long username on Solaris 10 I get this error:

crontab: login name too long

Is this a known problem and is there a solution for it (without changing the username)?

The username is 27 characters long.

  • Hi. I've never seen that before.

    But until a proper fix is found, you can edit the crontab file directly with vi:

    vi /var/spool/cron/crontabs/"username"
    

    That should help in the meantime.

    From Pinho
  • After some quick googling, it appears that many of Sun's Unix tools follow the old Unix convention of usernames being 8 characters or fewer. It seems that Solaris will allow you to use longer usernames, but it is not a certified or supported configuration.

    innaM : True enough. But not really helpful, is it?
    From Josh Budde

How to report less memory to 32 bit program so it will work on 64 bit Vista?

I have a legacy setup program that will not install on a 64-bit version of Vista with 4GB of RAM. The setup program performs a check at the beginning of the installation to see if there is enough memory. It determines there is "less than 256K of RAM." I assume this is because of a signed 32-bit number being used in their math.

I imagine I could take some memory out of the computer and try it. I will as a last resort. But, I was hoping there may be some setting or command line option to get Vista to report less than 4GB to the setup.exe process.

Does anyone know of a way to do this?

  • Use BCDEdit to set the truncatememory option. That will limit the memory available to Windows.

    To use it first check what BCD entries you have with

    BCDEDIT /v
    

    Note the id of the entry you want, and then use

    BCDEDIT /set "{id}" truncatememory 1073741824
    

    This will limit it to 1 GB.

    TomTom : Don't forget to undo that after the install (see the note after these answers). This must be the most ridiculous incompatibility ever... I hope the software runs at all.
  • Another alternative is to run Windows in a virtual machine like Virtualbox. Then you can sandbox the application and run it with as much or as little memory as you'd like, as well as run with an older version of Windows if you have licensing available to do so (if it's a compatibility issue).

  • One of the available compatibility shims in Windows is "GlobalMemoryStatus2GB". This might be enough. Look in the Application Compatibility Toolkit.
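
As TomTom notes, remember to remove the truncatememory setting once the installer has finished; something along these lines should do it, using the same {id} as above, and it takes effect at the next reboot:

    BCDEDIT /deletevalue "{id}" truncatememory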

How to pipe stderr without piping stdout

How do I pipe the standard error stream without piping the standard out stream?

I know this command works, but it also writes the standard out.

Command 2>&1 | tee -a $LOG

How do I get just the standard error?

Note: What I want out of this is to just write the stderr stream to a log and write both stderr and stdout to the console.

  • To do that, use one extra file descriptor to switch stderr and stdout:

    find /var/log 3>&1 1>&2 2>&3 | tee foo.file
    

    Basically, it works, or at least I think it works, as follows:
    The re-directions are evaluated left-to-right.

    3>&1 Makes a new file descriptor, 3 a duplicate (copy) of fd 1 (stdout).

    1>&2 Make stdout (1) a duplicate of fd 2 (stderr)

    2>&3 Make fd 2, a duplicate (copy) of 3, which was previously made a copy of stdout.

    So now stderr and stdout are switched.

    | tee foo.file : tee duplicates file descriptor 1, which was made into stderr, writing it both to foo.file and to the console (a combined one-liner for the question's exact goal follows these answers).

    Kyle Brandt : Oh, not tested with ksh, works with bash though ...
    C. Ross : Thanks, works in ksh too. I think most of the pipe and stream things are posix standard.
  • according to the man page for ksh (pdksh), you can just do:

    Command 2>&1 >/dev/null | cat -n

    i.e. dup stderr to stdout, redirect stdout to /dev/null, then pipe into 'cat -n'

    works on pdksh on my system:

    $ errorecho(){ echo "$@" >&2;}
    
    $ errorecho foo
    foo
    
    $ errorecho foo >/dev/null   # should still display even with stdout redirected
    foo
    
    $ errorecho foo 2>&1 >/dev/null | cat -n
         1  foo
    $   
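
Combining the descriptor swap from the first answer with the question's original goal (stderr appended to the log, both streams still visible on the console), something like this should work:

    # stdout goes straight to the console; stderr goes through the pipe into
    # tee, which appends it to $LOG and also echoes it to the console.
    Command 3>&1 1>&2 2>&3 | tee -a "$LOG"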
    

Windows 2003 Server Static IP resets to DHCP on reboot

I have a customer who has a Dell server running Windows Server 2003 R2. This is an existing machine that we had set up with a static IP. Now they want to change it to another static IP.

When they go and change it, hit OK, then reboot, the config is back to the old IP.

The user is the administrator.

My guess is it has to do with AD & GP, a virus, an already-in-use IP, or something else.

Anyone have any ideas what else could cause this behavior?

  • Most servers have at least 2 NICs; are they definitely setting the new address on the right one?

    From MarkM
  • Does it work after changing the IP, before rebooting? Rebooting is not required to change an IP address.

    Is there any anti-virus or security software that may be overzealous in protecting network settings?

    From DrStalker
  • Broadcom NICs? It's a known issue. I can't remember the solution exactly, but if you call Dell support, they should be able to help you in a few minutes.

    Basically this is due to a package not installed properly during Dell automated install (or maybe vice versa - you HAVE to use the OMSA CD to install the OS)

    Just call support - it's a well documented issue, easy to solve

    shaiss : still looking into this. Our user is going through upgrading their NICs. We'll see if that helps at all, then I'll mark this as answered. Thank you
    Hondalex : This is why we don't use Broadcom NICs; we always get Intel NICs to save us the trouble.
    dyasny : Hondalex, b-coms are OK when you know their quirks, and this issue has been around for years now - there's an easy and well documented solution available, only one support call to Dell away
    From dyasny
  • I had a similar problem on a Dell PowerEdge 1950 with Broadcom NICs installed. Running "netsh int ip reset" from a command line fixed it.

    From joeqwerty

Monitoring services for email servers

Does anyone have experience of hosted services for monitoring the status of email servers?

I host my own email on a Linux VPS. If either my SMTP or IMAP servers stop responding to requests, I want to receive an SMS within 10 minutes telling me so.

Better still, I'd like to be able to set up arbitrary TCP banner checks (i.e. periodically connect to a specified port and verify that it receives a specified string). I know that software exists that does this - monit, nagios, etc - but I don't want to host or maintain it myself.

Subscription services are fine (indeed I'd expect to pay something for this).

Update: I previously stated that Pingdom only supports HTTP, but it does actually support SMTP and IMAP as well. Thanks @fmu on Twitter for pointing this out!

  • DynDNS NetMon is $9.99/mo or $99.99/yr and supports {SMTP,IMAP}{,S}. It doesn't seem to support arbitrary TCP banner checks.

    It supports notification to "e-mail or pager aliases"; I'm not sure what a "pager alias" is, but it doesn't appear to support SMS notification. It could be combined with something like Clickatell to provide SMS support.

    (This is answering my own question, but I'd like to see if anyone knows any other services that do this. Also, if you've actually used this service, please edit or comment on this answer.)

    From Sam Stokes
  • You can monitor the services using SNMP; SolarWinds has a product for this. However, if you want to check the actual mail flow, which would be more far-reaching than just monitoring ports, you could write a VBScript to do a mail-flow check, put it in a loop (say every 5 minutes) and report any errors (a rough self-hosted banner check is sketched after these answers).

    From Nasa
  • Pingdom has several price plans including a free plan, supports multiple check types and SMS notification.

    From Sam Stokes
  • I've tried a few of these. Pingdom has some nice features and isn't so pricey, but it has its issues. The way it monitors email does not catch every fault: it may tell you your email is functioning while people are receiving bounce-back messages or your emails are not going through. I'm researching a few other sites on their effectiveness and will let you know my findings.

    From
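
If you do end up scripting a check yourself, as Nasa suggests, a rough sketch of a TCP banner check might look like the following. The host, port, expected banner and SMS email gateway are all placeholder assumptions, and it relies on nc and a working mail command being available:

    #!/bin/sh
    # Hypothetical banner check: connect, read the first line, alert if it
    # does not start with the expected string.
    HOST=mail.example.com
    PORT=25
    EXPECT="220 "

    BANNER=$(printf 'QUIT\r\n' | nc -w 10 "$HOST" "$PORT" | head -1)
    case "$BANNER" in
      "$EXPECT"*) : ;;    # banner looks healthy, do nothing
      *) echo "Check failed on $HOST:$PORT (got: $BANNER)" \
           | mail -s "ALERT: $HOST" 1234567890@sms-gateway.example.com ;;
    esac

Run from cron every few minutes this covers the SMTP case; pointing it at port 143 with EXPECT="* OK" covers the IMAP greeting.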

Terminal Server 2008 not issuing Volume device CAL's

We have a lot of volume licences left, but the license server apparently doesn't use them. Instead it issues temporary Per Device CALs, which is a bit odd of course...

There are two licensing servers installed on terminal servers, not on a domain controller (these are pushed by SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\LicenseServers registry setting)

  • You're mixing up your licensing terms. Do you have Windows Server 2008 Terminal Services CAL's installed on your TS license server? Is the license server activated?

    joeqwerty : Have you run the "Review Configuration" from TS License Manager?
    From joeqwerty

"Server Unavailable" and removed permissions on .NET sites after Windows Update

Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites.

What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to initialise.

I ran aspnet_regiis which didn't appear to fix the problem, so I ran FileMon which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine.

I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update as all of the sites on the server are running in 3.5.

  • Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
  • Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
  • Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
  • Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

Thanks for any light you can shed on this.

  • This French article relates to the ActiveX rollup, which might be connected with the issue. Please check whether you see that KB in your update history.

    Also, I think that the change from 2.0 to 3.5 reset the permissions (even if it shouldn't have...), so both updates could have changed the security settings on your server.

    tags2k : Hi r0ca, that KB has been deployed on the server but that was back on 14th November 2008. Is there a potential conflict between KB960715 and KB956844?
    r0ca : I'm not sure that it could be a conflict. Do you have all of them deployed?
    From r0ca

Prevent hotlinking requests by mime

Hi,

I am trying to prevent people hotlinking to PDF and DOC files. Usually, I would approach this with an .htaccess rule like this:

RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?domain.com/.*$ [NC]
RewriteRule \.(pdf|doc)$ /home/ [R=302,L]

However, many of these files are linked to through PHP scripts like filedownload.php?id=5 which then trigger the download of a PDF/DOC file. Is there a way to prevent hotlinking to these files via the MIME type of the outputted file? Or another way?

edit - added this source to show how files are called:

header("Pragma: public"); 
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false);
header("Content-Type: $ctype");
header("Content-Disposition: attachment; filename=\"".basename($fn)."\";" );
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".$fs);
echo $upload["file_contents"];
exit();
  • I think your rewrite rule isn't going to know what the MIME type of the file is, since none of the code for the response will have been executed at that stage. I think the best alternative in this circumstance would be to add a referrer check inside your PHP code and redirect from there if the referrer isn't from your domain.

    seengee : yep, that's exactly what we've ended up doing. We were just hoping there might be a more global solution without modifying individual files.
    Hans Lawrenz : If you're passing the file name into the php script with a get parameter then you could maybe make a rewrite rule to look at that file name.
    seengee : @hrwl will give you the credit since that is the solution we came to independently

SSH from Windows hangs when using insert mode in vim on Dreamhost: Why?

I have SSH set up using Cygwin on Windows XP SP3 to Dreamhost. It works fine except when I edit a file with vi and use insert mode (e.g. press 'i' and type in some stuff). I then try to hit Escape and ZZ to save/exit, and it hangs instead. My edits aren't saved and I have to kill the session (locally) and kill the vi process on Dreamhost.

This is highly annoying. It's not reliable either. Sometimes it does work.

Also, this happens with PuTTY too.

  • I've had this sort of issue over SSH before; could it be related to software flow control? Try hitting Ctrl-Q (issuing an XON signal) to verify (see also the note after this answer).

    cletus : I'm aware of the ctrl-s/ctrl-q thing. I'm pretty sure that's not it.
    From Andy
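
If software flow control does turn out to be the culprit, one workaround is to disable XON/XOFF handling in the remote shell so that an accidental Ctrl-S can no longer freeze the session; for example, a line like this in the shell startup file on the Dreamhost side (an assumption, not something from the answer above):

    # Disable XON/XOFF software flow control so Ctrl-S doesn't pause output;
    # only do it when stdin is actually a terminal.
    [ -t 0 ] && stty -ixon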

Linux variant of a sysprep answer file

I am installing a large number of Linux systems (RHEL5), from a preconfigured image. If I run sys-unconfig in that image before I distribute it, upon the first run of the imaged system, I get asked a lot of questions - hostname, IP, etc.

My question is how to pass the answers to those questions automatically, the way I would pass an answer file to a freshly sysprepped Windows build.

  • I don't have RHEL here but on my FC11 in /etc/rc.d/rc.sysinit on line 705 I have the following:

    if [ -f /.unconfigured ]; then
     # some code here to do some things
    fi
    

    So my first answer is: unless you modify this file, there is no way to automate this. I will try to get a copy of the CentOS version, as it is the same as RHEL, and confirm that it is the same.

    Granted, RHEL may have done things differently due to enterprise needs, and I will confirm that.

    dyasny : Found the needed fields in RHEL5.3, thanks!
    From Wayne
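
Building on the rc.sysinit check above, one possible approach is to skip the interactive questions entirely: have your own first-boot script write the relevant files and never create /.unconfigured. A rough sketch with placeholder values (hostname, IP and netmask are assumptions):

    #!/bin/sh
    # Hypothetical first-boot snippet baked into the image.
    NEWHOST=web01.example.com
    NEWIP=10.0.0.21

    {
      echo "NETWORKING=yes"
      echo "HOSTNAME=$NEWHOST"
    } > /etc/sysconfig/network

    {
      echo "DEVICE=eth0"
      echo "ONBOOT=yes"
      echo "BOOTPROTO=static"
      echo "IPADDR=$NEWIP"
      echo "NETMASK=255.255.255.0"
    } > /etc/sysconfig/network-scripts/ifcfg-eth0

    # With the files pre-populated there is no need for sys-unconfig, so
    # rc.sysinit never finds /.unconfigured and never asks the questions.
    rm -f /.unconfigured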

DNS lookup to AD-DNS from a different subnet

I have a Windows 2008 R2 Active Directory with several domain controllers, all with DNS. This is spread over three locations, interconnected with site to site VPN. All computers and servers are on the same domain.

My problem is that I only get partial results when accessing a DNS server from one of the other sites (on a different subnet).

Example from the same site (subnet) the DC is on:

> serverfault.com
Server:  ad03.mycompany.local
Address:  10.40.49.50

Non-authoritative answer:
Name:    serverfault.com
Address:  69.59.196.212

> devapps.mycompany.local
Server:  ad03.mycompany.local
Address:  10.40.49.50

Name:    devapps.mycompany.local
Address:  10.101.30.152

Example from a different site (subnet) than the DC:

> serverfault.com
Server:  ad03.mycompany.local
Address:  10.40.49.50

Non-authoritative answer:
Name:    serverfault.com
Address:  69.59.196.212

> devapps.mycompany.local
Server:  ad03.mycompany.local
Address:  10.40.49.50

*** ad03.mycompany.local can't find devapps.mycompany.local: Non-existent domain

As seen, DNS lookups for public (forwarded) addresses work from everywhere, while local (Active Directory) addresses only resolve from the local subnet (not over the VPN).

Why does this happen? Is this a security feature of Windows 2008 R2? I presume the firewall is not the problem, since both queries go over the same channel.

Edit: I have now enabled debug logs as suggested by John Röthlisberger and I have proven that my packets actually do not arrive at the server. It seems that the VPN setup somewhere redirects my DNS packets to a different server, i.e. my server is not the cause of this problem.

  • The response "can't find devapps.mycompany.local: Non-existent domain" suggests that you are correctly talking to the DNS server so it doesn't appear to be a network or firewall issue. Enable debug logging on the DNS server to get a better idea of what's going on.

WGET Localhost 0 bytes

Hi, I am trying to execute wget locally using cron. I have been told by my hosting provider that, due to a local loopback issue, this won't work.

I am attempting the following command:

wget -q -O /pathtofile/blah.xml "http://myurl/myfeed.php?id=26"

What I am trying to do here is take the output (RSS) and save it on my web server as XML. The way I have been doing this is to open the URL, save the source as XML and upload it, so I would like to automate this.

Error text:

--12:38:58-- http://www.myurl.com/mydir/myfeed.php?id=26
=> `myfeed.php?id=26'
Resolving www.myurl.com... myip
Connecting to www.myurl.com|myip|:80... failed: Connection refused. 

Is there anything I can do to achieve this?

  • If you can modify myfeed.php to take command line variables as well as $_POST/$_GET then you can just execute PHP from cron:

    php /path/to/myfeed.php --id=26
    

    See here for more info on passing args to command line PHP.

    You will need to do something like this at the top of your file:

    define('CLI', php_sapi_name() == 'cli');
    
    if(CLI){
       $input =& $argv;
    }else{
      $input =& $_POST;
    }
    
    if(isset($input['id'])) // etc...
    
    From beggs
  • It will work if:

    1. Your web-server is listening on the local interface; and
    2. Your /etc/hosts file has www.myurl.com pointing to the local IP.

    Otherwise, it will fail. If you can do a netstat -untap and confirm that your web server is listening on the local interface, it should work (see also the note after these answers).

    From sybreon
  • you can also try to use the --bind-address option.

      --bind-address=ADDRESS
           When making client TCP/IP connections, bind to ADDRESS on the local machine. ADDRESS may be specified as a hostname or IP
           address.  This option can be useful if your machine is bound to multiple IPs.
    

    and bind to the external IP, instead of the local one.

    From Drakonen
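
If the web server does listen on the loopback interface (sybreon's first condition), one further workaround, not mentioned in the answers above, is to request 127.0.0.1 directly and pass the site name in the Host header, which avoids the name resolution and the refused external connection entirely:

    # Talk to the local web server directly and supply the virtual host name;
    # assumes the server actually accepts connections on 127.0.0.1.
    wget -q -O /pathtofile/blah.xml \
         --header="Host: www.myurl.com" \
         "http://127.0.0.1/mydir/myfeed.php?id=26"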

Check access logs for specific file

I am trying to find out how many times a specific web file has been accessed. I have root access to the server, but I'm not sure where to look. The only place I have looked is /home/FTPUSER/access-logs, which is a symlink to /usr/local/apache/domlogs/perrysre, and that access log only has 1 day of data in it.

Any help would be greatly appreciated.

  • Hi, all Apache logs are usually stored in:

    /var/log/apache(2)/access.log

    /var/log/apache(2)/error.log

    /var/log/apache(2)/custom.log (any custom logs)

    a nice command I like in order to count something:

    cat /var/log/apache2/access.log | grep WORD-TO-LOOK | wc -l
    

    then you will have a number. Let's suppose you have an FTP log, and every time John connects there is a line 'John opened a session'. So you would do:

    cat /var/log/ftp.log | grep John opened a session | wc -l
    

    which will give you how many times John opened a session. If you want specific periods of time, etc., you can do that as well.

    AlberT : Why don't grep directly the file avoiding cat?
    markdrayton : You need to quote 'John opened a session' because it contains spaces. And "grep -c 'John opened a session' /var/log/ftp.log" avoids both cat and wc!
    Kronick : thanks for the new trick :)
    From Kronick
  • Since I can't work out from your question what system settings you are operating with, I think a universal way to find the files used by a process can do the trick.

    Try using lsof -p <PID_OF_APACHE_DAEMON>.

    You can retrieve the PID in a number of ways: one is looking at the netstat -tlnop output, another is using lsof -i, and so on.

    This is a POC that can work:

    lsof -p $(lsof -i :80 | head -2 | tail -1 | awk '{print $2}') | grep log
    
    httpd   2618 root  mem    REG  253,0           64072 /usr/lib/httpd/modules/mod_logio.so (path inode=63267)
    httpd   2618 root  mem    REG  253,0           64070 /usr/lib/httpd/modules/mod_log_config.so (path inode=63265)
    httpd   2618 root    2w   REG  253,2    1461  720904 /var/log/httpd/error_log
    httpd   2618 root    6w   REG  253,2    1461  720904 /var/log/httpd/error_log
    httpd   2618 root    7w   REG  253,2    4483  720899 /var/log/httpd/access_log
    

    Here I have assumed your apache daemon is listening on the standard tcp port 80 of course.

    From AlberT
  • If your access log contains only a day's worth of data it is presumably being rotated each day. You'll need to work out how this is configured. If you're using Linux it might be with logrotate -- look in /etc/logrotate.d/ or /etc/logrotate.conf if they exist. On FreeBSD log rotation is configured in /etc/newsyslog.conf.

    Apache might also be doing it via rotatelogs. If so, this'll be set up in a CustomLog line in the server configuration (httpd.conf), which could be in /etc/httpd or, more likely given your log location, /usr/local/apache/conf.

    If none of this works ask the person who configured it!

HP - CommandView - get logs from events from command line?

In Hp - CommandView EVA, is there any way to get the event logs from the command line?

I'm trying to get the logs from the web interface but the button that says "get log" from the event tabs doesn't work :(

Thanks

  • Well, after some time I found out that the logs are located on disk, in this path: C:\hsvmafiles

    But the file that interests us is in a "binary" format that only HP can analyse...

    Thanks

    From Flip

windows genuine advantage blocking remote desktop

I have a remote server that has just run through a windows update. The server has rebooted and seems to be fine.

However, I can no longer connect via remote desktop. Someone in my office tells me they had a similar problem with a server here and it was a "genuine advantage" tool wanting to be completed.

Is there a way to get the tool to complete its install without visiting the physical server?

  • Visit this URL: http://www.microsoft.com/genuine/ - it runs the WGA setup. Although it seems odd that this has affected your Remote Desktop. Has the update that ran changed or enabled the Windows Firewall?

    JohnyV : and on the left is the 'Validate Windows' link; this will install the WGA plugin
    Bart B : How is he supposed to do this without physically visiting the server though?
    JohnyV : I missed that part sorry....
    From JohnyV
  • I just had the exact same issue, setting up an XP Pro client to connect to a terminal server.

    I did notice that whenever I logged into the admin account on the client it kept asking me to complete the installation of the genuine advantage tool, but I kept clicking cancel. For some reason I could not connect to the terminal server even though the network was up 100%.

    So as a last ditch effort I completed the installation of the genuine advantage tool and hey presto remote desktop connection now connects to the terminal server.

    Interesting issue; as for not visiting the physical server, hmmm... that's a stupid one, because sometimes the tool needs user intervention, but if you can't connect remotely, all I can say is "good one" MS.

    Can't try this myself but try sending an administrative restart from the client. Could be that the WGA hasn't finished installing yet.

  • Try connecting with the /admin or /console switch (depending on what version of the client you have)

    mstsc /admin
    mstsc /console
    

    Nothing will appear different about the client from there on in, that's normal.

    This should allow you to connect to the console session, where you should be able to complete WGA.

    From tomfanning

How to manage the way router shares traffic

I just came here from stackoverflow, hope it's the right place to ask this question.

We share internet in our apartment via a Linksys router (2 people are connected over wireless and 2 people over Ethernet cable). The channel is 15 Mbit, which should be enough, but when someone starts to download something at max speed, no one else can even open a web page. Is there a way to tell the router to share traffic among peers? Or does it not distinguish between connected computers and just forward packets?

Another solution would be using a download manager with speed control but my roommates are lazy :)

Thank you.

  • You can probably install DD-WRT firmware on your router. It has 2 types of QoS: HFSC and HTB (a rough sketch of what HTB does is included after this answer).

    DD-wrt QoS

    It also has:

    • service based priority
    • mac based priority
    • ip based priority
    • port based priority
    Mecki : HFSC is broken in DD-WRT AFAIK (sometimes more, sometimes less, but they admit that in their own wiki and forum), thus everyone is highly recommended to use HTB until they finally fix this once and for all. Just a little tip (using DD-WRT myself at home)
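
For a rough idea of what the HTB option does underneath, DD-WRT builds its shaping from tc rules along these lines. The interface name and rates are placeholder assumptions, this only shows the egress (upload) side, and the DD-WRT QoS page normally generates all of it for you:

    # Hypothetical HTB hierarchy: cap the link just under its real speed and
    # give each class a guaranteed share it may borrow beyond when idle.
    WAN=vlan1    # WAN interface name varies by router model (assumption)

    tc qdisc add dev $WAN root handle 1: htb default 20
    tc class add dev $WAN parent 1:  classid 1:1  htb rate 14mbit
    tc class add dev $WAN parent 1:1 classid 1:10 htb rate 7mbit ceil 14mbit
    tc class add dev $WAN parent 1:1 classid 1:20 htb rate 7mbit ceil 14mbit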

Will the use of a domain controller be beneficial over using a workgroup in an IT company of 10-15 people?

We are a company of 10-15 people and are planning to get a server with Windows 2008. We would like to know whether it is beneficial to use a domain controller instead of configuring a workgroup. What are the pros and cons of each, and how easy is it to migrate from a workgroup to a domain if we plan to switch at a later date?

We don't plan to have a system admin on board (in the near future), hence our objective is for the login mechanism to be simple to use, with less trouble.

  • As far as I understand, domain control would make life easier for at least the following tasks:

    • single login on any machine, if people sometimes use different computers;
    • implementing and changing common policies, security, printer settings, etc. for all/group of computers;
    • the possibility to tie login to any other software that supports AD/LDAP, such as project management tools, some version control software, etc.
    • auditing

    Whether it is worth it in a 15-person company probably depends. The last time I worked in a 10-person company I didn't use it; however, now that I work with 400+ people I find it extremely useful and would quite possibly do it in a 10-person company, too.

    From Gnudiff
  • The more people you have in a network, the more difficult your transition to a domain will be.

    I would not recommend Windows Server 2008 for a small company. A system based on Small Business Server 2008 would be more appropriate. Its cost is comparable and it includes many features that are simply not available in standard server. This includes remote access features as well as Exchange Server for shared Calendars, Contacts, Tasks, and Email.

    Domains provide many benefits:

    • Centralized logins. With a workgroup you have to define user accounts on each system and to permit easy access to the server, the accounts must be identical as well as the passwords. With a domain, you create one account and it's used on all systems.
    • Centralized Administration. You can manage all workstations remotely as well as the server with ease and create policies to configure all workstations at boot.
    • Vastly improved security. Much easier group management and assigning of permissions to users.
    • Software deployment. For example, create a share, a Group Policy package, and you can install office on all your computers without ever having to touch the machines.

    And I'm probably forgetting others. For me, Domains provide easily configured networks that frankly, I rarely have to touch more than typical windows updates.

    Chris W : +1 for mentioning SBS - far more cost effective for small Co's and too often overlooked or not recommended by IT providers, in my experience.
    tomfanning : Ugh, SBS. It's a total nightmare to migrate from SBS to Standard edition when you grow. If you foresee growth, don't get SBS.
    Multiverse IT : How do you figure? What's so difficult about it? And why wouldn't you migrate to EBS after SBS? If you know the product, I don't see any difficulty in migrating - other than a possible loss of Remote Web Workplace and reports... but migrating to standard server should be pretty easy... add it as a DC and demote the SBS box off the network. That assumes you don't use the transition pack. What was your experience.
  • I presume you're not planning on growing headcount suddenly, in which case taking an SBS server on board would be a great place to start - it really isn't difficult to administer, and you certainly shouldn't find yourself needing a dedicated admin to look after it unless you're looking to do anything other than the very basic vanilla install that the wizards will help you with. There are a lot of great communities out there that support it and will answer any questions you may have.

    From Chris W
  • There is a crossover point where the additional overhead of managing a domain has to be balanced against the overhead of not having one, so it depends on how you work in your current environment. If you find that you're sharing a lot of data between users and fooling around with multiple local accounts, then yes, a domain is going to give you substantial benefit.

    From mh

Switch Fabric Optimization

How does an ethernet switch work on a port-by-port basis? Specifications are given for the speed of the switch "fabric", but is that universal bandwidth through a central processor or are there optimal places to plug things in where the traffic won't have to hit the main bus?

Example: If I have two ports that will be talking to one another a lot, will it be better to put them next to one another or will it be better to put them on different blocks of ports?

  • Depends a little on the switch but generally it makes sense for them to be on the same card if possible.

    From Chopper3
  • As Chopper3 says, it will depend on the switch; it also depends on the features used (VLAN, ACL, multicast, etc.).
    On a single-card switch like a Cisco 2950 or 2960, even though the hardware groups ports 4 by 4 (from what I remember), putting 2 servers on one group of 4 ports rather than on 2 groups might reduce latency, but this would probably be imperceptible.
    On a stacked switch like a Cisco 3750, putting both servers on the same stack unit is better: you will have lower latency and won't use bandwidth on the stack ring.
    On a chassis like a Cisco 6500, you definitely want to put both servers on the same card.

    In any case, putting both servers on the same group of ports can only be better for performance. Keep in mind that what is better for performance is worse for resilience, because if you have a hardware problem on that group of ports, on one unit of the stack, on the line card, ... you will lose both servers. So for HA or anything like that, it's not a good idea.

    The best is still to test with your switch; measuring such small latencies needs a special Ethernet tester.

    From radius
  • It all depends on the chip hardware. There is a lot of research on the design of switching fabrics, because these things often trade chip area and cost for 'switchability'. For example, a fully switchable switch is a crossbar, but that increases the cost steeply with the addition of each port.

    However, I feel that if your application requires optimisation down to that level, you may be better served by looking at other avenues instead. Maybe compressing your data so that it consumes less bandwidth or even running something entirely different.

    From sybreon