Sunday, January 23, 2011

Simplest way to respawn a configured number of instances of a specific process.

So we have an app which we want to run multiple instances of on Linux. The number should be configurable. We also want a new instance to be booted up whenever one of the instances disappears.

I was looking into C-based programs, shell scripts, Python scripts, etc., but I was wondering what would be the simplest, easiest way to do it. Are there any tools out there? Can one simply use some built-in Linux functionality?

Linux distribution is Red Hat.

  • Monit is the tool for the job. With monit, you can control a lot of variables and act upon changes. More info here.
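    As a rough illustration, a monit stanza per configured instance might look something like this (the pidfile convention and per-instance init scripts are assumptions about your app; repeat the stanza for each instance):

    check process myapp1 with pidfile /var/run/myapp1.pid
        start program = "/etc/init.d/myapp1 start"
        stop program  = "/etc/init.d/myapp1 stop"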

NameServer SOA records misconfigured

This is my config of NS.

hostingdk.com. SOA zone1.hostingdk.com admin.hostingdk.com
2010051905;
43100;
7200;
2419100;
86400;

hostingdk.com. NS zone1.hostingdk.com.
hostingdk.com. NS zone2.hostingdk.com.

zone1.hostingdk.com. A 96.30.49.11
zone2.hostingdk.com. A 96.30.46.238

Both zone1 & zone2 are registered as name servers in the Enom domain control panel.

My problem is that one .lv domain cannot change its DNS to my name servers. They said:

Error : Nameserver zone1.hostingdk.com cannot be queried for SOA
Error : Nameserver zone2.hostingdk.com cannot be queried for SOA

Please help me, how do I fix it?

  • If you are using BIND format, the SOA must not have semicolons between the fields. In my case, it is:

     @       SOA     dns1.grenoble.cnrs.fr. dnsmaster.grenoble.cnrs.fr. ( 2010051802 3600 900 604800 3600 )
    
    From Dom
  • The error being reported is because your two servers (zone1 and zone2 above) are not correctly serving your zone file:

    % dig +norec @96.30.46.238 hostingdk.com. soa
    
    ; <<>> DiG 9.6.0-APPLE-P2 <<>> +norec @96.30.46.238 hostingdk.com. soa
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 5139
    ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    

    This may be because of the semi-colon issue pointed out by @Dom - in which case the server logs on those two boxes should tell you that. If you're running BIND, use named-checkzone to check the syntax of your zone files.

    If you've actually got the right syntax now, but it's still not working, you need to look at the ACLs in your server - make sure that you're actually permitting access to that zone from 0.0.0.0/0 (aka "any").
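    As a rough sketch of both checks (the zone file path here is an assumption; adjust it to your layout): verify the zone syntax with named-checkzone, and make sure the zone's ACL allows queries from anywhere:

    named-checkzone hostingdk.com /var/named/hostingdk.com.zone

    zone "hostingdk.com" {
        type master;
        file "/var/named/hostingdk.com.zone";
        allow-query { any; };
    };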

    From Alnitak

How to remove this attack from the server?

Hey guys, I have made a website. But after a few months of running successfully, it is now showing a virus attack. How do I remove this? And what should I do to avoid these attacks in the future? I have put a screenshot so that you can understand well.

  • If you removed the virus and the warning is still shown, you should go to Google Webmaster Tools and request a malware review.

    More info here: http://www.stopbadware.org/home/reviewinfo

  • The only satisfactory solution is to reinstall from a backup taken when you knew the machine was clean. If you don't have a backup, wipe it and start again. Properly and fully removing a virus is seldom a simple job, despite the claims made by antivirus software vendors.

    I suggest you enlist the services of an experienced system administrator to help you fix the problems you have and to secure the server a lot better than it is now. This is not a job for the inexperienced, unless you want to go through this again... and again...

SELinux vs. AppArmor vs. grsecurity

I have to set up a server that should be as secure as possible. Which security enhancement would you use and why, SELinux, AppArmor or grsecurity? Can you give me some tips, hints, pros/cons for those three?

AFAIK:

  • SELinux: most powerful but most complex
  • AppArmor: simpler configuration / management than SELinux
  • grsecurity: simple configuration due to auto training, more features than just access control
  • Personally, I would use SELinux because I would end up targeting some flavor of RHEL which has this set up out of the box for the most part. There is also a responsive set of maintainers at Red Hat and a lot of very good documentation out there about configuring SELinux. Useful links below.

    Rook : yeah but yum and selinux are so damn annoying.
    Ophidian : I find yum's CLI significantly more intuitive than apt. SELinux is annoying when you're trying to go your own way with non-stock apps, but I've never had issues with the stock stuff beyond needing to turn on some sebools to enable non-default functionality (e.g. let httpd PHP scripts connect to the database)
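    As a concrete sketch of that last point, the relevant SELinux boolean can be flipped on RHEL-family systems like this (check getsebool for what your policy actually ships):

    # persistently allow httpd (and PHP running under it) to connect to database ports
    setsebool -P httpd_can_network_connect_db on
    # list the httpd-related booleans available on the box
    getsebool -a | grep httpd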
    From Ophidian
  • A "server" to provide what kind of service? To what audience, in what environment? What constitutes "secure" to you in this context? Lots more information would be necessary to provide a useful answer. For instance, a pure IP Time-of-Day server can be very secure -- all ROM firmware, radio imput, self contained battery power with automatic charging. But that's probably not a useful answer for you.

    So, what kind of service? Internet wide, enterprise wide, trusted work team, dedicated point-to-point networking, etc.? Is high availability a need? Reliability? Data Integrity? Access control? Give us some more information about what you want, and recognize that "secure" is a word whose meaning has many dimensions.

    From mpez0
  • I have done a lot of research in this area. I have even exploited AppArmor's rulesets for MySQL. AppArmor is the weakest form of process separation. The property that I'm exploiting is that all processes have write privileges to some of the same directories, such as /tmp/. What's nice about AppArmor is that it breaks some exploits without getting in the user's or administrator's way. However, AppArmor has some fundamental flaws that aren't going to be fixed any time soon.

    SELinux is very secure; it's also very annoying. Unlike AppArmor, most legitimate applications will not run until SELinux has been reconfigured. Most often this results in the administrator misconfiguring SELinux or disabling it altogether.

    grsecurity is a very large package of tools. The one I like the most is grsecurity's enhanced chroot. This is even more secure than SELinux, although it takes some skill and some time to set up a chroot jail, whereas SELinux and AppArmor "just work".

    There is a 4th system, a virtual machine. Vulnerabilities have been found in VM environments that can allow an attacker to "break out". However, a VM provides even greater separation than a chroot, because in a VM you are sharing fewer resources between processes. The resources available to a VM are virtual, and can have little or no overlap with other VMs. This also relates to <buzzword> "cloud computing" </buzzword>. In a cloud environment you could have a very clean separation between your database and web application, which is important for security. It may also be possible that one exploit could own the entire cloud and all VMs running on it.

    From Rook

.htaccess help required for apache server

I am searching for a redirection code for my URL.

What I want is that when someone searches on my site, it should redirect.

Example: if someone searches for google.com on my site, then in the address line it should look like www.mydomain.com/google.com.

It can use the $_POST method or $_GET.

How do I do that?

  • RewriteEngine On
    RewriteCond %{HTTP_REFERER} google\.com
    RewriteRule ^.*$ http://www.mydomain.com/google.com [R,L]
    

    in an .htaccess file might work; I have not tested it.

    You could try:

    RewriteCond %{THE_REQUEST} ^[A-Z]+\ /(.*)\/search\.php\?q=(www\.)?([^/\ ]+)[^\ ]*\ HTTP/
    RewriteRule ^.*$ http://www.mydomain.com/%1 [R,L]
    

    this link also has some other example 'smart' .htaccess rules: http://www.askapache.com/htaccess/http-https-rewriterule-redirect.html

    mathew : RewriteCond %{THE_REQUEST} ^[A-Z]+\ /search\.php\?q=(www\.)?([^/\ ]+)[^\ ]*\ HTTP/ this is for a request of any kind of search domain... but what I don't know is how to convert it to http://www.mydomain.com/domain.com
    cpbills : added another potential answer
    mathew : Nope, it doesn't work.
    cpbills : Does it do /anything/? Can you provide more information as to how it doesn't help? Maybe enable logging for RewriteRules with `RewriteLog file-path` and `RewriteLogLevel 9` http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html You also have to be willing to play with the regular expression in the `RewriteCond` to see if you can at least get it to trigger. I don't know where you got that pattern, so I have no idea if it should work or not; you provided it.
    From cpbills

Can IIS6 compression file types be configured on a per-site basis?

The following article explains how to customise the file types that can be compressed in IIS 6:

Customizing the File Types IIS Compresses (IIS 6.0) [MS TechNet]

The metabase settings discussed are global settings.

Can I configure this on a per-site basis?

  • Yes, you can.

    See "To enable HTTP Compression for Individual Sites and Site Elements" here.

    Edit: I misread the details of the question. I am pretty sure I have configured different file extensions for compression on different sites in the past, but I also can't seem to find any definitive answer right now. I'll check when I'm at work tomorrow.

    Kev : I read that. That's just the settings to turn on/off compression by site. There's no mention about whether you can customise the file types on a site by site basis.
  • After some digging about, it looks like this is a global setting and can only be configured at the following metabase locations:

    /LM/W3SVC/Filters/Compression/gzip
    /LM/W3SVC/Filters/Compression/deflate

    For more info:

    HcFileExtensions - MSDN Library
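    For reference, the global lists are usually edited with adsutil.vbs from C:\Inetpub\AdminScripts, roughly like this (a sketch; the extensions shown are only examples, and IIS needs a restart to pick up the change):

    cscript.exe adsutil.vbs set W3Svc/Filters/Compression/GZIP/HcFileExtensions "htm" "html" "txt"
    cscript.exe adsutil.vbs set W3Svc/Filters/Compression/DEFLATE/HcFileExtensions "htm" "html" "txt"
    iisreset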

    From Kev

How to take a snapshot of the filesystem in Linux (files and their sizes only, not their data)

We need to take a snapshot of our Linux server. We don't want to back up the data, just a snapshot we can compare against to detect changes.

  • I would recommend checking MD5 sums of files, and not just file sizes. However:

    find / -printf "%h/%f %s\n" > /some/path/filesize would generate a list of files and their sizes.

    You could also do find / -type f -print0 | xargs -0 md5sum 2> /dev/null 1> /some/path/file-md5s to generate a list of filenames and their MD5 sums (the -print0/-0 pair keeps filenames with spaces from breaking the list).

    From cpbills
  • Use

    du -a <filesystem mount point>

    It should give you the size of each file with its full path (the -a flag lists files as well as directories).

    From A.Rashad
  • find / -ls > fileinfo.txt
    

    But take a look at tripwire and aide, since that seems to be what you are aiming for anyway. Also note that rpm can check files against checksums and for debian based distributions there is debsums.

    Pier : Agree. Probably tripwire and aide are what he's looking for.
    From ptman

Setting up linux server with multiple access rights

I am a graduate student and want to set up a linux server (preferably Ubuntu) in my office. I also want to give my friends SSH access to that box.

My question is: can I set up my server such that I can give one of my friends the right to install software on my machine, but he cannot browse around outside the directory he is allowed to?

Can I set up multiple Apache instances (on different ports) for different people, so each has access to their own Apache instance?

  • You can do this with finely controlled access in the /etc/sudoers file. As root, you will want to run the command visudo and add something along the lines of:

    username         ALL =  (root) /usr/bin/apt-get update,        \
                                   /usr/bin/apt-get install
    

    or:

    username         ALL =  (root) /path/to/yum install
    

    depending on whether you're using CentOS or some other distribution that uses yum, or Debian/Ubuntu, which uses apt and apt-get.

    Those lines in the /etc/sudoers file would allow username to run the commands /usr/bin/apt-get update and /usr/bin/apt-get install [packagename] or /path/to/yum install [packagename] as the root user, and they will be prompted for their password, not root's. They will have no other privileged access to the machine.

    Beyond that, most packages can be compiled from source with commands like:

    ./configure --prefix=/home/username
    make
    make install
    

    which will install the package to their home directory, usually creating directories like ~/bin, ~/lib, and ~/share.

    So maybe ./configure --prefix=/home/username/local or something would be more appropriate.

    For setting up Apache httpd, to allow each user control over their own VirtualHost without running multiple instances, you can add an option to the Apache configuration (something like /etc/apache2/apache2.conf), a line that says:

    Include /home/*/httpd/user.conf
    

    The configuration file can be named whatever you want, whatever might be more appropriate, but what this tells Apache is to look in /home/*/httpd/ (where * is translated as a glob to whatever subdirectories are under /home) for a file called user.conf, where you can permit your users to add information about their VirtualHosts; a sketch of such a file follows.
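    As a rough sketch, a user's /home/username/httpd/user.conf might contain something like this (the hostname and log paths are hypothetical):

    <VirtualHost *:80>
        ServerName username.example.com
        DocumentRoot /home/username/public_html
        ErrorLog /home/username/httpd/error.log
        CustomLog /home/username/httpd/access.log combined
    </VirtualHost>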

    A normal user could install or configure Apache to run out of their home directory on a non-privileged port, if you wanted to grant them access in that way. A non-privileged port being anything over 1024, they would have to add a directive to their personal Apache configuration saying something like Listen ip.add.re.ss:8888, starting an Apache httpd server on port 8888.

    To be sure they cannot browse into your, or anyone else's, home directories, make sure they are set chmod 700 or chmod 711 (to allow Apache httpd to execute their directory, to get through to /home/username/public_html if you want to have user dirs in Apache). You can test this by doing ls -ld /home/username; it should show:

    drwx------ 185 username users 36864 May 18 17:05 /home/username/
    

    for permissions 700, and drwx--x--x for 711. If it shows up as drwxr-xr-x, then you will need to run chmod 700 /home/username or chmod 711 /home/username.

    cpbills : added information about apache and allowing multiple users personalized access.
    From cpbills

CSR, SSL and load balancers?

Hi folks,

do I need to generate a CSR on the load balancer or on the individual servers?

  • Depends on whether you'll be terminating SSL on the load balancer or web servers...

    In general, if your load balancer can handle it, then better to do it all there and take the load off the web servers. Also it allows quicker deployment of new servers as it's one less step to worry about.

    Having said that, once you have your private key and ssl cert from the provider, you can back these up and use them wherever you like (on LBs or servers), so you won't be tied to one method or the other permanently.

    Warner : What? They're asking about the certificate request.
    Robbo : Yes, and I added some relevant thoughts around using load balancers for SSL termination and finished by saying what you did.
    Warner : I didn't down-vote you, I rarely down-vote. CSR != CRT. See: http://en.wikipedia.org/wiki/Certificate_signing_request
    From Robbo
  • You can generate the CSR anywhere. The certificate generated will need to be in a format that the device using it can utilize. Typically, that will be PEM.

    From Warner
  • A CSR is a bunch of info (like the DN and CommonName) in addition to the public key. Download the openssl library and do the tricks mentioned here.

    http://www.rapidssl.com/ssl-certificate-support/generate-csr/apache_mod_ssl.htm

    Once you get the cert, make sure you copy the private key and cert, along with the CA cert (or create a chain cert), since custom applications often don't update their root certs.
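    As a minimal sketch, generating the private key and CSR with the openssl CLI looks something like this (file names and key size are arbitrary choices):

    openssl genrsa -out example.com.key 2048
    openssl req -new -key example.com.key -out example.com.csr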

    From RainDoctor

Mac OS X 10.5/6, authenticate against NIS or LDAP when both servers have your username

We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record?

I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it.

Is there any way to make the OS check the rest of the search path after it finds the username?

  • Well, the problem lies in how the PAM modules pam_ldap.so and pam_unix.so are stacked. pam_unix.so deals with both NIS and local files.

    Pass the debug argument to pam_unix.so and the "debug 4" argument to pam_ldap.so.

    Append these arguments to every line that has these modules in the system-auth file.

    From RainDoctor

What's better for deploying a website + DB on EC2: 2 small VM or a large one?

I'm planning the deployment of a mid-sized website with a SQL Server Standard DB. I've chosen Amazon EC2 to deploy it. I now have to choose between these 2 options:

1) get 2 small instances (1 core each, 1.7 GB of ram each): one for the IIS front-end, one for running the DB. Note: these "small instances" can only run the 32-bit version of Win2008 Server

2) a single large instance (4 cores, 7.5 gb of ram) where I'd install both IIS and the SQL Server. Note: this large instance can only run the 64-bit version of Win2008 Server

What's better in terms of performance, scalability, ease of management (launching a new instance while I back up the principal instance), etc.?

All suggestions and points of view are welcome!

  • This seems to be a little bit of a budget decision.

    I would take the large instance because you have more reserve, memory-wise as well as CPU-wise. Also, I have read about the small EC2 instances becoming sluggish. A bit of headroom can't hurt.

    There are also additional cores, so the CPU load of running backups might have an even smaller performance impact.

    Additionally, you save one instance of Win2008 Server, which is a cost, plus the associated CPU and memory overhead of running the OS twice. I have to admit that I don't know the pricing model of Win2008 Server (cost per CPU, thread, socket, or ...).

    If you ran into saturation of the large VM, this would have occurred far earlier with the little VMs, as they aren't even half the specs of the big one.

    Last but not least, if you really have to launch another instance for backup, you only have to launch one instance.

    So with Windows as the OS, I don't really see a benefit from splitting the workload over two tiny isolated VMs.

    devguy : Thanks for the comment. It's not really a budget decision, since we already own the licenses, and the two setups differ by about 80-90€/month... so that doesn't impact a lot. I'm mostly concerned about difficult scalability options... since I'd probably need to switch to a setup like #1 (but with more powerful machines) to be able to add more front-end servers while keeping a single/shared powerful DB machine.
  • The two images is probably going to be better for scalability, administration, and general management.

    The single image is probably going to be cheaper, especially if you never have to scale this site out much.

    Performance will depend largely on your implementation, but will likely be similar on both setups. The single image has more RAM and processing cores; this may be very important to your implementation (or maybe it will make no difference in the slightest).

    devguy : I'm interested in this part "Performance will depend largely on your implementation, but will likely be similar on both setups". How can they be similar? Setup #2 runs both services, but has 4 times the ram and cpu...and there's no network latency to bring the data from a separate server. My main concerns are mostly about scalability (how do I attach a new IIS machine from the same image, since it also includes the DB??)
    Chris S : The network latency between the two "machines" is likely to pale in comparison to the Internet latency of the client. Unless your implementation is actively accessing more than ~1.5GB of DB data, the DB server's RAM is less important than the underlying disk storage, which is likely to be the same on both. Multiple machines cannot run off the same underlying image; not yet anyway. If you went with setup #2 and added another IIS VM later, it would have to be different. I'm sorry for being vague, but I don't know the details of the application.
    devguy : Thanks Chris for the additional info. That is actually what I'm considering about setup #2. I have always read however that's best to have all the ram possible on the DB server (also on some Stack Overflow posts by Jeff) and that's why I thought it would be better to have a large machine with a lot of ram to run the DB. The IIS would be pretty light anyway. Sure there's the problem of scalability this way...but I hope the large machine would suffice for quite some time...until a decent initial success at least...
    deploymonkey : I looked for disk performance on EC2 and found 2 interesting tests. There are people that run benchmarks on EC2. Just have a look. http://blog.mudy.info/2009/04/disk-io-ec2-vs-mosso-vs-linode/ http://stu.mp/2009/12/disk-io-and-throughput-benchmarks-on-amazons-ec2.html And considering the decision of what to opt for, just run a simulation if you can, and you will notice whether disk IO or CPU is what limits user numbers first. You might even be okay with one smaller instance. It depends on your app.
    From Chris S
  • The two previous answers give some good decision points, but one thing not mentioned is your site availability requirements - if you use either of the architectures you suggested, can you tolerate your site being down while you relaunch a crashed EC2 instance? (Startup times are especially long for Windows instances; I've seen it take up to 30 minutes.)

    Whichever way you go, I recommend storing your database on a separate Elastic Block Store volume so that you can easily reattach it to a new instance in case of failure. EBS volumes are also easy to back up using the snapshot facility provided by AWS.
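    For example, with the EC2 API tools a snapshot of that volume can be taken with something like the following (the volume ID is a placeholder):

    ec2-create-snapshot vol-12345678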

    devguy : yes, DB and website file would be on a separate EBS volume...so that I can also startup a new instance, attach it to that EBS volume, and stop the previous instance
  • As you just mentioned that you can throw money at the problem and that you anticipate some scaling, go with 2 instances. That way you can gain experience with separation of services and have a better starting point for profiling and benchmarking your services.

    You might even want to migrate your DB to OSS at a later point, which is easier that way.

    (Informative: cloning and duplicating instances in EC2 is possible. This article is for Linux, but maybe it gives you a hint about how to make a running copy of your installs.)

  • I was once told by a very clever network architect, whom I respect a lot, to keep each machine as simple as possible. Always!

    So, I would go for small instances, separately - once they become too small, consider upgrading them or spawning extra instances.

    Because you have them split up from the beginning, it's easier to put in extra power where needed instead of paying too much for the wrong setup.

    It becomes a bit harder to maintain and back up more images, but you also gain the benefit of more scalability, I think.

    We have run with a similar setup for quite some years now, running on VMware, and the SQL Server is separated from the 2 IIS machines.

    We even have a secondary SQL Server now, and that's possible because we could also link them for sync purposes.

    devguy : Very correct about the scalability. I'm just a bit worried about two things: 1) the setup would be more complex, and there's the possibility that it would be more complex for nothing, if we don't reach a point where the 2 instances are not enough; 2) I'm worried that the total performance of the 2 machines would be noticeably less than the performance of a single large machine. I think the single large machine could last quite a bit longer than the 2 small ones...

How to point www.example.com/testing/ to another hosting

I have 2 domains: one is the old domain, one is the new domain.

I set up my new site on the new domain, but I wish to have the new www.example.com/testing/ redirect to the old domain's folder.

Can it be done? How?

How to downgrade a certain ubuntu 10.04 package (php) back to karmic

Hi! I've updated from 9.10 to 10.04, but unfortunately the PHP provided with 10.04 is not yet supported by Zend Optimizer. As far as I understand, I need to somehow replace the PHP 5.3 package provided under 10.04 with the older PHP 5.2 package provided under 9.10. However, I am not sure whether this is the right way to downgrade PHP, and if it is, I don't know how to replace the 10.04 package with the 9.10 package. Could you please help me with that?

nginx proxying different servers for different subdomains

I'm trying to use nginx as a proxy so that http://stuff.theanti9.com/ goes to a separate computer and everything else goes to a local instance of Apache (which would be accessed by http://theanti9.com or http://www.theanti9.com). I tried configuring it, but when I go to my domain, I just get the "Welcome to nginx!" page. Here's what I have:

user www-data;
worker_processes  1;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;



    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    gzip  on;

    include /etc/nginx/conf.d/*.conf;
    server {
        listen 80;
        server_name theanti9.com www.theanti9.com;
        access_log /var/log/nginx/access.log;
        location / {
            proxy_pass http://localhost:8000;
        }
    }

    server {
        listen 80;
        server_name stuff.theanti9.com;
        access_log /var/log/nginx/access.log;
        location / {
            proxy_pass http://192.168.1.102:80;
        }
    }
}

I'm not really sure what's wrong. Any suggestions?

  • Check the files inside /etc/nginx/conf.d. Something might have overridden your settings.

    The.Anti.9 : conf.d is empty.

Public/Private IP address

We have several websites (with several public IP addresses) running on a web server. In IIS, the IP addresses are internal IP addresses (192.168.xxx.xxx). How do I figure out which public IP address matches which internal IP address? My goal is to change some public IP addresses. The particular web server is running IIS 6 on a Windows 2003 Server. Thanks, in advance, for your help!

  • Why not just log in to the server, pull up IIS Manager, and look at the bindings for each website?

    From Chris S
  • Are your servers multi-homed, or do you have the addresses NATed? If they're behind a NAT boundary (firewall?) why not just look there for the mapping?

    From MarkM
  • You must have a port forwarder or other device routing connections from the external public IP addresses to the internal addresses. Your best bet would be to get access to that device and look at the configuration mapping public to private.

    If all the IIS machines run the same web applications, you may have a load balancer handling the connection routing. In this case, your problem is perhaps simpler where you simply have the load balancer listen to the new addresses, but continue routing to the same pool of private machines.

    Based on your comment, you may need to add a Host Header definition to your IIS configuration. If you have two websites listening on the same port (e.g. 80), you need to tell IIS how to direct traffic to each site. You do this by telling it which host address is handled by each site. (I only have an IIS 5 server to look at, but the settings for this should be similar)

    1. Right click the web site and select Properties.
    2. Select the Web Site tab
    3. Next to the IP Address, click the Advanced button.
    4. In the Multiple Identities for this Web Site, edit the entry and set the Host Address to the host name of your website. For example if you access the web site at http://www.example.com, you would set the Host Address to www.example.com. Save the settings and restart IIS if necessary.

    Now the certificate issue is another problem and may actually be the source of the 400 errors. In order to decrypt the request, IIS needs to know the key to use. Since the entire request is encrypted, the only thing it can use to determine which key to use is the port on which the request arrived. If you have more than one SSL/TLS enabled website on the server, you will need to have each one listen on different ports and your firewall will need to know to route the request to that port. This also means your firewall will need to route specific public IP addresses/ports to the specific port for the private IP address.

    Charles : Thanks for your good suggestions. We do have a firewall that load balances a couple of websites. This particular website runs on one source webserver and I want to move it to another destination webserver. I used the same IP address as that of another website on the destination webserver, but I got a HTTP 400 error message and the home page wouldn't load. I was using IIS Manager (but it was only showing the internal IP address). I'll check the firewall. Also, the website has a SSL certificate. If there's anything else I should check, please let me know. Thanks to all of you!
    David Smith : I updated my answer with more information based on your comment.
  • Are you doing 1:1 NAT (or it may be called Virtual IPs) in your edge firewall? That would be one way to tell what public IPs map to private IPs.

    However it's likely that you're just using host headers in IIS for each website: do an nslookup, set server 8.8.8.8, and then look up the A record for each domain listed (I'd do the www host as well); the IP(s) that they resolve to will tell you what IPs are being used for your websites.

    I put 8.8.8.8 (Google's nameserver) in the nslookup example in case you have split DNS setup internally; this will make sure that you're getting the public IP not an internal IP.
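    A quick sketch of that lookup from a command prompt (www.example.com stands in for one of your real domains):

    nslookup
    > server 8.8.8.8
    > www.example.com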

    From gravyface

How Hard Is It for a Software Developer to Maintain a Server

I'm a software developer and don't have much experience as a sysadmin. I developed a web app and was considering buying a server and hosting the web app on it.

Is this a huge undertaking for a web developer? What's the level of difficulty of maintaining a server and keeping up with the latest security patches and all that kind of fun stuff. I'm a single user, and not planning to sell the service to others.

Can someone also recommend an OS for my case, and maybe some good learning resources that are concise and not too overwhelming?

  • CentOS is fine, but put it on a local system first and destroy it a few times.

    Also, RUTE.

  • If you are going Linux/Apache and you want to keep it as simple as possible, I'd suggest going with Linode (http://www.linode.com/) and use their Ubuntu images. Linode is a VPS provider and provides a great number of tools to automate things such as backups and let you manipulate your system through an http based console if the need arises. You'll have root access. You won't ever have to deal with anything hardware related and very rarely will you have to deal with anything network related.

    You've tagged the question as Centos related, but I typically find Ubuntu much simpler (and ubuntu.com's documentation is fantastic). Installing apache / php / mysql is pretty darn easy with apt-get (or aptitude). For documentation on Ubuntu, refer to http://www.ubuntu.com/products/whatisubuntu/serveredition and https://wiki.ubuntu.com/

    I don't think you'll have many problems doubling as a sysadmin if you remember to "automate everything" - I've done this before in a previous life. Learn how to write bash scripts (or scripts for the shell of your choice). Putting on a sysadmin hat as a developer is a very useful exercise. It'll help you both appreciate the work admins do and also make you tailor your development processes to make life easier on admins.
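    For example, a basic LAMP-style install on an Ubuntu box of that era would look roughly like this (package names assume the 10.04-era repositories):

    sudo apt-get update
    sudo apt-get install apache2 php5 libapache2-mod-php5 php5-mysql mysql-server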

    Ignacio Vazquez-Abrams : Easier than `yum install php-mysql mysql-server`?
    From whaley
  • Let's start with some simple answers...

    Is this a huge undertaking for a web developer?

    Depends on your level of experience and how comfortable you are with breaking and fixing things.

    What's the level of difficulty of maintaining a server and keeping up with the latest security patches and all that kind of fun stuff.

    Package updates (I refuse to endorse source-based distros like Gentoo for anyone who's not already a guru) are easy; securing web apps can be quite difficult, depending on what functionality you're trying to achieve. Web Application Exploits and Defenses is an interesting exercise in teaching developers to write secure applications, once the basics like PHP security and SQL injection are out of the way.

    Can someone also recommend an OS for my case, and maybe some good learning resources that's concise and not too overwhelming.

    Ubuntu is fairly newbie-friendly but may be too "simplistic" for some people's tastes. Both the distro and the community have a few strange ways of doing things, but filtered through a reasonable degree of cluefulness you should be able to achieve almost anything.

    It's a good idea to try and find a community that you can ask questions of - IRC, and specifically the Freenode network is good for anything open-source related - and forums are good for almost anything, if you can find the right one. Real People Who Know Things are also invaluable when starting out.

    From Andrew

SVN problem on OS X: Mismatched RA version

When I run any svn command on my Mac, I get messages like the following:

$svn help
svn: Mismatched RA version for 'neon': found 1.6.2, expected 1.6.5

$svn checkout /some/repo
svn: Mismatched RA version for 'http': found 1.6.2 expected 1.6.5

What did I do, and how do I rectify this problem?

  • Looks like not all of your SVN client was upgraded from 1.6.2 to 1.6.5 (namely the neon package, which is an HTTP/WebDAV library).

    A bit of a Google on this led me to these instructions:

    Check if you have neon by running:

    which neon-config
    

    If you have neon, a path to neon-config will be output. Everything before /bin/neon-config is your neon home directory.

    The neon version needs to be 0.25.x or greater. Check the neon version with:

    neon-config --version
    

    If you have a suitable version of neon, make a note of the neon home directory for use in the last step, Install Subversion Itself.

    If you don't have neon, or need to install a newer version, get a recent copy of it from the WebDAV website in a .tar.gz archive. Install it with:

    cd /research/oranfry/sources
    tar -xzf /path/to/neon-X.X.X.tar.gz
    cd neon-X.X.X
    ./configure --prefix=/research/oranfry/neon
    make
    make install
    

    Remember the neon home directory. In my case it is /research/oranfry/neon.

    (your mileage may vary; be careful of paths)

    From Farseeker
  • If you have installed Subversion from CollabNet on your Mac and are getting the above error, you're probably running the svn that ships with Mac OS X. Try this command:

    which svn
    

    If you get /usr/bin/svn, that's the old version causing the error.

    You need to add this line to ~/.bash_profile:

    export PATH=/opt/subversion/bin/:$PATH
    

    Log out, log in, and try the which command again; it should point to the new version.

    NOTE: The Collabnet installer says to put the export command into ~/.profile, however that doesn't seem to work.

How can a Perfmon "% Processor Time" counter be over 100%?

The counter, Process(sqlservr)\% Processor Time, is hovering around 300% on one of my database servers. This counter reflects the percent of total time SQL Server spent running on CPU (user mode + privilege mode). The book, Sql Server 2008 Internals and Troubleshooting, says that anything greater than 80% is a problem.

How is it possible for that counter to be over 100%?

  • There are two counters with the same name:

    Process\% Processor Time: The sum of processor time on each processor

    Processor(_Total)\% Processor Time: The total for all processors

    Your question indicates you're using the first counter, which means that its maximum value is 100% * (no of CPUs).

    So if you have 4 CPUs, then the total maximum is 400%, and 80% is actually (400 * 0.8 =) 320% (and for 8 CPUs it's 640%, etc etc)
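    A quick way to confirm the logical CPU count on the box, from cmd.exe:

    echo %NUMBER_OF_PROCESSORS%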

    Bill Paetzke : Yep! That's it. I have 8 CPUs. So my sqlservr total CPU usage is actually about 37%. And that means everything is normal. Thanks, @Farseeker!
    Farseeker : Any time. I had similar queries when I first started using perfmon on multiple-cpu systems.
    From Farseeker

Remotely running a program from the server

Hello, I have Windows Server 2008 and more computers in its domain. All I need is to run a program from the server on all of the computers, for example to install Kaspersky on all of them. Can I do it?

thanks

  • You may install programs using Group Policy Objects and the MSI files of a program distribution: Deploying an MSI through GPO.

    You also have the option to deploy some remote control software, e.g. UltraVNC, or to schedule program execution using the Task Scheduler.
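    If you script it via the Task Scheduler (or any other remote-execution mechanism), the command that runs on each machine is typically a silent MSI install, something along these lines (the share path and MSI name are placeholders):

    msiexec /i \\fileserver\deploy\kaspersky.msi /qn /norestart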

    This question is better suited for serverfault.

Two NIC cards with the same metric - Internet traffic going out the wrong one

I have two NIC cards in my computer - one is connected to our corporate network and the Internet, the other is connected to a private LAN through a Linksys WRT54G. Both cards use DHCP.

This was never an issue with Windows XP, but with Windows Vista (and Windows 7) the metric for the 0.0.0.0 route is the same (20), and it appears that some network traffic that should go out my main network card is going out my secondary card instead.

The solution to date is to delete the 0.0.0.0 route associated with the second NIC card, but I have to do this several times a day.

Is there a better solution?

--Bruce

  • Yes. You can override the automatic metric calculation in the Advanced settings of the TCP/IP properties of each card. Use this to set which NIC you want to be preferred.
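    If you would rather script it, something like the following should set the same preference on Vista/7 (the interface names are placeholders; check netsh interface ipv4 show interfaces for yours):

    netsh interface ipv4 set interface "Local Area Connection" metric=10
    netsh interface ipv4 set interface "Wireless Network Connection" metric=25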

  • Do both cards use the same subnet or something? You say both networks use DHCP; they really should be using different private ranges, otherwise you have a nonsensical network setup. If both cards have addresses in the same subnet, then the machine will correctly assume they are on the same network. If both networks use the same subnet, the solution is to change the subnet one of the routers issues addresses in, rather than butchering your network config. There are hundreds of private subnets to choose from, after all.

    From Bart B
  • You probably have to delete that route several times a day because your DHCP lease on the card that usually has the default route is for about 2 or 3 hours.

    I used to have a server with 2 network cards; one had a public IP address that could be contacted from the internet and the other card was plugged into the internal network. I found that every 2 hours, when the DHCP lease was renewed on the internal network card, it could change the routing I had set up, until I started using "-p" at the end of the command, which makes the route permanent so you won't lose it, not even after restarting.

    Example: route add x.x.x.0 mask x.x.x.0 192.168.1.1 -p

    From Nico
  • Well, I've had the same sort of problem as described, but on Windows XP, and it was solved by the automatic metric calculation answer above; thanks Kevin.

    I will explain my setup and exactly how I solved it. My computer is connected to two networks: the first via a wifi card to a router for internet access, the second via a wired network card attached to a hub, a temporary measure put in place so I can configure a NAS box also attached to the hub.

    My predicament: not being able to browse the instruction manual online whilst I configure the NAS box through its own web-based interface! I set up the following subnets:

    192.168.100.1 - wired to the NAS box through a hub

    192.168.200.1 - wifi to the internet through a router

    The effect was strange: sometimes I could browse a page, other times it would just time out; clearly internet traffic was getting lost down the wrong subnet.

    Here's how to fix it:

    Open up a command prompt and type 'route print'. You can then verify the 'metric' for each subnet you're running: look at the lines where the Netmask shows as 0.0.0.0 and take a note of the metrics; the wired network will most likely be '20' and the wireless '25'. Note: the lower value tells your computer to use that subnet over the other, certainly in the case of web browsing.

    Go to Start menu > Control Panel > Network Connections > open the Properties of the non-internet network > under the 'General' tab, from within the 'This connection uses the following items' list, select 'Internet Protocol (TCP/IP)' and click the 'Properties' button > under the 'General' tab click the 'Advanced' button > under the 'IP Settings' tab untick the 'Automatic metric' box and enter into the 'Interface metric:' field the higher of the two values collected from the 'route print' command.

    Repeat for the internet-enabled subnet, but this time enter the lower value. You can then verify the settings by going back and running the 'route print' command again.

    Hope these instructions help somebody.

    From mark

Installing Hyper-V Integration Components on Linux

Some big news this week was that Microsoft released the source code for the Hyper-V integration components for Linux under the GPL v2.

I just installed Ubuntu Server 9.04 in a Hyper-V VM with a Legacy Network Adapter. How do I install the integration components? Do I have to wait until they are included in the kernel?

  • You can either wait for a distro-integrated kernel to include it, wait for someone in the community to build an appropriate kernel package (which probably won't take too long), or patch and build a kernel yourself. Unless you're familiar with the procedures for building a kernel and applying kernel patches (given that there'll likely be significant changes between the Ubuntu-released kernel and the bleeding edge kernel these patches are targeted at), I'd leave it alone and wait for someone else to do it. It won't be a trivial operation.

    sybreon : almost exactly as I would have put it.
    From womble
  • I found this in answer to another post on ServerFault (cross reference http://serverfault.com/questions/138110/ubuntu-10-04-server-on-hyper-v-server-r2-has-sluggish-install-and-command-line):

    http://blog.allanglesit.com/Blog/tabid/66/EntryId/53/Hyper-V-Guests-Ubuntu-10-04-Alpha-3-Synthetic-Devices.aspx

    In summary, the integration components are already part of the 2.6.32 Linux Kernel, at least in Ubuntu 10.04. Quoting:

    Add the following to /etc/initramfs-tools/modules:

    hv_vmbus
    hv_storvsc
    hv_blkvsc
    hv_netvsc

    Generate a new initrd image:

    update-initramfs -u

    Make sure /etc/network/interfaces is pointed at the synthetic network adapter:

    auto seth0
    iface seth0 inet dhcp

    It worked well for me to get the synthetic network adapter working with an Ubuntu 10.04 Server 64 bit guest operating system.