Thursday, February 3, 2011

Server 2003 answers ping, but won't serve HTTP, FTP, SMTP or POP3

After a reboot, my server won't respond to any incoming request until it is rebooted again. Then, about 5-6 hours later, any website on it will return a ping, but the server will not serve the page, nor will it answer FTP, POP3 or SMTP requests.

The System log shows W3SVC errors 1014 and 1074, which relate to an application pool not replying. I have one phpAdmin app pool, which I have stopped; it shows a solitary website as the default app, but the server no longer serves .php extensions, and I can't transfer the default website to another pool to kill the whole app pool.

I would appreciate your help.

  • Welcome to pool hell. If the application pool responsible for your sites isn't running... your site won't run. As far as the default site goes, you probably need to configure it to run the PHP extensions. If you don't really need the default site, you might simply need to stop it and start the correct sites. I've seen similar situations where two sites are configured to both accept all server names on the same port/IP.

    Manfred : I managed to get rid of the app pool problems (I think) - no more log messages. The only app pool running now is the default app pool; PHP is gone. Still, at 6:40 am the server shut down and restarted, and when it came back up it would not serve, but it pings... I am puzzled.
    From TheCompWiz
  • Just because the server is pinging doesn't mean that the requisite services are. (That just indicates that the server is running and the network stack is working.)

    Have you verified that IIS (which covers WWW and FTP) is running, as well as your SMTP service? You mention the IIS errors but don't mention whether those errors have stopped the service or not.

    Manfred : I am not located close to the server, but it sounds like I should take a trip to see it onsite the next time it hangs.
    gWaldo : If you can RDP into it, do that. Also, if you open an MMC under an account that has rights to that server, do so and open services and event viewer (or computer management to do it all at once) to check out the services and logs.
    gWaldo : run nmap against it to see what ports it _is_ listening on. That may help give you a clue what is working...
    From gWaldo
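To illustrate gWaldo's port-scanning suggestion without assuming nmap is installed, here is a small Python sketch that checks whether the relevant service ports accept TCP connections. The target host is a placeholder; in practice you would run this from another machine against the server's address.

```python
import socket

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host; the ports cover the services in question:
# HTTP (80), FTP (21), SMTP (25), POP3 (110)
for port in (80, 21, 25, 110):
    state = "open" if check_port("127.0.0.1", port) else "closed/filtered"
    print(port, state)
```

A port reported open here but unresponsive to requests points at the service itself (hung app pool) rather than the network stack, which matches the ping-but-no-page symptom.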

How to configure Postfix client relay to Exchange 2010 server

I'm getting

(delivery temporarily suspended: SASL authentication failed; server[] said: 535 5.7.3 Authentication unsuccessful)
when I try to relay mail from Postfix 2.5.5-1.1 on a Debian Lenny box to Exchange 2010.

I think I tried all possible combinations but I'm definitely missing something. Here is the relevant part of my configuration:

broken_sasl_auth_clients = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_pix_workarounds =
smtp_sasl_type = cyrus
smtp_always_send_ehlo = yes
relayhost =

And I have libsasl2-modules installed. Has anybody managed to successfully relay mail between Postfix and Exchange? Oh, and I have already double-checked that the password is right.

  • The Exchange Server will offer GSSAPI (Kerberos), but it seems that the Cyrus SASL library providing authentication services to Postfix was not configured to handle GSSAPI.

    man 5 postconf | less +/^smtp_sasl_mechanism_filter

    This will tell you what you need to set smtp_sasl_mechanism_filter to in order to get this to authenticate properly.

    helcim : So if I'm right I should have smtp_sasl_mechanism_filter = gssapi, but then I'm getting postfix/smtp[3196]: warning:[]:25 offered no supported AUTH mechanisms: 'NTLM'
    Khushil : that would suggest that the MS Exchange Server is set up with a different auth mechanism - can you please check which auth is turned on on the MS Exchange server, or follow the guide at to set up NTLM (GSSAPI) with the MS Exchange server.
    Khushil : you could also try smtp_sasl_mechanism_filter = !gssapi, !ntlm, static:rest
    From Khushil
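For completeness, here is the typical shape of the pieces discussed above; the relay hostname, port and credentials below are placeholders, not values from the question. Since the server in this thread offered only NTLM, the mechanism filter would name ntlm (Postfix needs the Cyrus NTLM plugin installed for this to work):

```
# /etc/postfix/main.cf (fragment; hostname and port are placeholders)
relayhost = [exchange.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_mechanism_filter = ntlm

# /etc/postfix/sasl_passwd (placeholder credentials)
# Regenerate the hash map after editing: postmap /etc/postfix/sasl_passwd
[exchange.example.com]:587    EXAMPLE\relayuser:secret
```

The lookup key in sasl_passwd must match the relayhost value exactly, brackets and port included, or Postfix will not find the credentials.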

Group Policy or reg hack to hide startup dialogs in XP

I have a system on which I'd like to hide some dialogs at startup. I've gotten rid of the login and welcome screens and all kinds of stuff, but there are still a couple of pesky dialogs I can't seem to hide: the "Loading your personal settings" dialog and the "Applying your personal settings" dialog.

Does anyone know how to hide them?

  • You could edit the (probably) gina.dll and change the graphics inside to be clear but you are probably asking for trouble at that point.

    You generally want those dialogs. Are you working with some xp embedded platform and are trying to do something stealthy?

    Oliver Nelson : It is for an embedded system, but running full XP...not trying to do anything stealthy, just wanted the startup to look as clean as possible. Client wouldn't accept my solution (which was to use a different OS entirely).
    Matt : Then you could probably lock down the theme, keeping the background colors the same. Then change the graphic (it should be in the gina) to match the background color. When you hack the gina you run the risk that a security patch in the future may overwrite your gina - but since a lot of vendors hack the gina these days (fingerprint logon, etc.) the rest of the vendors don't seem too concerned about it. Not exactly sure on the order, but writing your own shell may deal with that as well.
    Matt : they discuss that here, but you can run your program directly, or write your own Windows program to have only the menus you require and lock them out of the rest of the OS. Pretty sure the local policies dialog you are seeing would still come up - almost 100% sure that's in the gina DLL.
    Matt : Wow. If you wanna be hardcore you can write (or outsource) one.
    From Matt
  • Have you checked your verbose messages configuration?

    John Gardeniers : That article describes how to turn ON verbose messages, which are turned off by default. That's the reverse of what was asked.
    edusysadmin : Correct, but an article on how to activate something can be helpful for checking the current status of such a setting if one was not aware of it. I can think of plenty of scenarios where an admin might not know all of the active policy configurations.
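On that note, the same policy area has the opposite switch: the Group Policy "Remove Boot / Shutdown / Logon / Logoff status messages" sets a registry value that is reported to suppress those status dialogs on XP. A hedged sketch; verify on a test machine before deploying:

```
Windows Registry Editor Version 5.00

; Suppress the "Loading/Applying your personal settings" status text
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"DisableStatusMessages"=dword:00000001
```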

What Trouble Ticket system would you recommend?

Possible Duplicate:
What are some good web based trouble ticket systems?

Hi, I am looking for a simple trouble ticketing system for my small software development company. It should be web based, preferably open source for self-hosting, but a cheap hosted service is okay too.

The main feature should be e-mail submission of new tickets.

Thanks for helping.

EDIT: This is for customer support. For internal activities (projects, plans, bugs etc) we use Redmine.

  • Is this for tech support or for some sort of Bug Tracking?

    We use SysAid which suits us very well, and is free for our purposes.

    EDIT: SysAid is a web-based helpdesk application.

    For your purposes, I think SysAid would be very good for you to try anyways. And the new version comes out tomorrow I believe.

    twk : Do customers sending requests by email need to have a previously created account?
  • For your consideration:

    From schultkl
  • <3 Tender. Any email sent to it creates an "issue" (case). Depending on which email address is used, you can create different buckets to drop the emails into. Users do not need accounts. You can allow users to set up accounts or, alternatively, set the whole thing to "Private" so they can only see the discussions they have created. Very slick.

  • We use Mantis Bug Tracker. It's a free online bug tracking system and it works very well for us. It emails you whenever you've got a new bug, etc. Give it a peek and see if it'll do what you need it to do (it's pretty flexible and easy to set up).

  • I had good success with Best Practical's RT:

  • What we did is use our internal bug tracking system to handle external client requests via email as well. Perhaps Redmine supports this? For smaller companies it can be really nice to have it all in one place, because then you can easily convert client requests into tasks and assign them to team members. Just a thought...

    From jjriv
  • CodeSmith Insight is help desk software with advanced application integration. It can handle your emails and user feedback, as well as your applications' error and crash reports. Everything is in one place and you can reply to an error report just as easily as you can reply to an email.

Permission denied on /proc entries for some process

I am using chkrootkit to scan my system and it gave me "Permission denied" errors:

/proc/23746/fd/0: Permission denied
/proc/23746/fd/1: Permission denied
/proc/23746/fd/2: Permission denied
/proc/23746/fd/3: Permission denied
/proc/23746/fd/5: Permission denied
/proc/23746/fd/8: Permission denied
/proc/23746/fd/11: Permission denied

[/proc/23746/fd]# ls -liah
/bin/ls: cannot read symbolic link 0: Permission denied
/bin/ls: cannot read symbolic link 1: Permission denied
/bin/ls: cannot read symbolic link 2: Permission denied
/bin/ls: cannot read symbolic link 3: Permission denied
/bin/ls: cannot read symbolic link 5: Permission denied
/bin/ls: cannot read symbolic link 8: Permission denied
/bin/ls: cannot read symbolic link 11: Permission denied
total 0
1489109001 dr-x------ 2 root root  0 Oct  8 10:46 ./
1489108994 dr-xr-xr-x 4 root root  0 Oct  8 10:46 ../
1489141760 lrwx------ 1 root root 64 Oct  8 10:48 0
1489141761 lrwx------ 1 root root 64 Oct  8 10:48 1
1489141771 lr-x------ 1 root root 64 Oct  8 10:48 11
1489141762 lrwx------ 1 root root 64 Oct  8 10:48 2
1489141763 lrwx------ 1 root root 64 Oct  8 10:48 3
1489141765 lr-x------ 1 root root 64 Oct  8 10:48 5
1489141768 l-wx------ 1 root root 64 Oct  8 10:48 8

What should I do to fix them?

  • That's okay: you can't access the resources of processes that are not yours (unless you're root).

    Master : I am logged in as root
    From o_O Tync
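To illustrate the answer: the fd symlinks under /proc/<pid>/fd are readable by the owning user and normally by root, but not by other users (hardened kernels such as grsecurity can deny even root). This small Python sketch performs the same readlink operation that ls attempts:

```python
import os

# A process may always read its own fd symlinks under /proc/self/fd:
fd_dir = "/proc/self/fd"
for name in sorted(os.listdir(fd_dir), key=int):
    try:
        print(name, "->", os.readlink(os.path.join(fd_dir, name)))
    except OSError:
        # The fd used to list the directory may already be closed.
        pass

# Doing the same against /proc/<pid>/fd of a process owned by another
# user raises PermissionError - the error chkrootkit and ls reported.
```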

How to sync email with iPhone

A client of mine has an iPhone. We have verified that he has the correct settings entered in his iPhone for his Exchange account. Other users have iPhones as well and are able to get email on their devices. His username, password, server and email address are all correct. SSL is on, which it should be. When setting up the account the iPhone accepts it, but when he goes to check his mail a warning pops up stating that it is not able to connect to the Exchange server. Any ideas on where to look to solve this?

  • Make sure that you're on Wi-Fi and not the cell network.

    Ensure that his Exchange server is reachable by the iPhone. Since you don't have a command-line to ping from, try accessing OWA (Outlook Web Access) by going to http://[mailserver]/exchange/

    From gWaldo
  • Have you tried setting his phone up with another user's account and setting his account up on another user's phone? This should tell you if it's a phone or account issue.

    Jason T. : Well I tried setting his account up on my iPhone and the same thing happened. That's why I'm starting to think it's an issue with his account but I can't pinpoint it.
    Chopper3 : just get your Exchange admin to compare the accounts - tell him what you've just tried, it'll help.
    joeqwerty : In that case I'd look at the Exchange Features tab of the user account properties and make sure that Mobile Services are enabled.
    Jason T. : Thanks for the tip. Although after looking for that I cannot seem to find it. In fact, there is no Exchange Features tab for that user in AD. Nor does any other user have that tab.
    Chopper3 : just for sanity's sake try this problem user and a working user on another phone - just to be sure of your belief that it's a user issue ok. Also why not add a comment here showing the various config details (minus passwords of course) of the bad and good users - plus any other config details of the phones too - have you compared the data service APN settings etc.?
    From Chopper3
  • Found the solution, at least for my case. In Active Directory I went into the user's properties, Security tab, clicked on SELF, then Advanced, and checked the box to allow inheritable permissions from the parent. After that he was able to sync up.

    Kara Marfia : Thanks for coming back with the fix. I love the solutions which don't seem like they should've changed anything...
    Jason T. : Yeah, I'm not sure how that setting was changed. Checked other users' accounts and that was checked. :-\
    From Jason T.

What options do I have if my Linux box is compromised?

I have put in place all the security measures and log monitoring tools, but I don't know what to do if I find out that there is a rootkit on my system.

I have live sites running on my system and I can't turn off the server. How can I remove the infection?

Regarding backups: should I back up the whole Linux system or just the public_html directory and database? I am currently backing up only those folders.

I have a VPS and my hosting company takes daily snapshots, but what other ways are there to be safe, so that if I get a rootkit infection I can recover from it?

  • If your web server has a virus, the only safe thing to do is nuke it from outer space. That's right, put it onto the next space shuttle mission, make sure it's jettisoned far enough away from Earth that we don't all get showered in EMP or fallout, and press the red button that makes it explode.

    If that's unfeasible, too expensive, or your local shopping centre has run out of nuclear bombs, then the only other way to make sure any virus is gone is to format the server. Your hosting provider may be able to assist you with this by setting up a second VPS and giving you a month or so to move everything over before shutting down and deleting the current instance. Of course, if you just migrate everything over indiscriminately from the old VPS to the new VPS then you'll likely bring the virus with you.

    If you have customer data on there and there's a risk you're leaking that data or taking part in a botnet or a backdoor has been left in the system, then you have an obligation to your clients to do everything in your power, and simply scanning/removing any known virus isn't really enough because you just never know what they've left behind.

    Regarding the backups, I would say you're doing the right thing, because you shouldn't have execute permissions on anything in the public_html folder, and the database is unlikely to be harbouring anything malicious.

    Master : If I get the new VPS and need to get all the websites running again very soon, which directories do you think I need to back up so that all sites come back up in the least possible time? I mean all the user accounts, Apache settings, home directories, MX records, cPanel/WHM settings.
    symcbean : Good answer - except the bit about 'virus' - yes there's malware on Unix systems - but no virus in the wild since the Morris worm.
    symcbean : @Master: the most common cause of system intrusions is insecure CGI code. Once an attacker finds a hole, they usually install further backdoors to get access - so by restoring your website you could well be restoring the route in to compromise the system.
    Master : @symcbean, then what is the best way to restore backups? How do I find out whether the website code is compromised - I mean, what things should I look for, or how do I scan for that?
    Cypher : @Master: what symcbean is saying is that you most likely have a hole in your web code: be it CGI, PHP, Perl or some bundled package, such as phpMyAdmin, MySQL, or other popular control panels/software tools. While restoring your data onto a clean system is a good thing, you should be looking for holes in your code that the attacker may have used to gain access to your system in the first place, or this will just happen all over again. There are other ways for attackers to gain access: bad passwords, plain FTP, unpatched software (Apache, MySQL, phpMyAdmin, control panels, Drupal, WordPress, etc.).
    From Farseeker
  • I would suggest you keep backups of just your data: your site files and your database data. Keep those backups off the system. If you get infected or the server gets compromised, you can have your host "initialize" the server (reset to original state), you copy up your data and you are back online with a clean server.

    I also suggest that you sweep your system and find out how you got infected/compromised in the first place and implement measures to prevent that from happening again.

    From Cypher
  • Backups, backups, backups. Sadly, there is no way to be sure that the rootkit is gone without formatting and restoring from backup. I would keep backups of the data directories of each of your services (webserver, so /home/*/public_html and /var/www; MySQL, probably /var/lib/mysql, read up on each service you use to find where the files are stored) and a backup of your configuration (/etc), and any local changes you've made to the system and home directories (/home/*, /usr/local/*) at bare minimum.

    To further elaborate on potential rootkits: once they obtain root privileges, it is possible for them to mask every sign that they exist on the infected system.

    symcbean : -1: as commented elsewhere, most systems are compromised by bad website code. Having the bad website code on a tape ready to install on top of a clean machine is a very temporary fix. Usually rootkits are installed **after** the system has been compromised.

Split Brain DNS and DNS forwarding

Hi All

This may be an unusual question, but I would like to find out if this is possible.

We have several security zones behind firewall, let's call them LAN, DMZ and Backend.

There is a DNS server (BIND) in the DMZ zone, set up as split DNS; i.e., it resolves public addresses for requests made from the Internet, and private NATed addresses for the same domain for requests coming from the LAN and Backend.

It all works fine; however, I am now introducing Windows 2008 AD into the Backend, as the server base grows and managing SAM databases is not an option anymore. The Windows domain name is DOMAIN.COM. I realise that this may be a confusing setup, but it was done to keep things simple in the naming department.
Naturally this requires using Windows DNS, which is on the same AD.DOMAIN.COM server.
The DNS zones on this server work fine, and I have set up a forwarder for any Internet-related queries.
Now the question. If I want to resolve a host located in the DMZ NATed subnet from the Windows hosts in the Backend (i.e. use the internal part of the split-brain DNS in the DMZ), how do I make sure that requests for "whatever_is_not_in_windows_domain.com_zone" are forwarded to the internal split-brain server in the DMZ? Is it possible at all? I realise that I can hardcode those hosts into the Windows DNS server zone, but that looks like a workaround, not a solution...
Hopefully I was clear enough :)

  • I don't think this is possible: AD.DOMAIN.COM believes it is the authoritative source for this domain and will respond with NXDOMAIN no matter what.
    I would really advise you to create a subdomain to put your AD into. As your setup grows this will become a bigger problem, and manually adding hosts to both zones doesn't seem like a nice task.

    It would be possible to run an Active Directory with a BIND DNS server.
    What you could do is merge the zones and allow updates from the AD.DOMAIN.COM server.
    However, this requires the DOMAIN.COM zone to be a dynamic zone.

    Sergei : This is an interesting idea. Where can I read more about it?
    faker : The only non-obvious option you will need to set in your zone is "check-names ignore;".
    Evan Anderson : +1 - Naming your Active Directory domain "" is a mistake. You're going against Microsoft-recommended best practices. Running BIND for your DNS for the Active Directory domain is an option, but you still have an ugly situation. Be aware that, in order for Group Policy to work properly, the "" name must resolve to the IP address of the domain controller computer(s), for example.
    Sergei : Thank you Evan, what is exactly done against best practises?What is solution - have a subdomain?
    Sergei : Or totally different domain like domain.corp?
    From faker
  • I don't think this is possible. The backend Windows DNS server is authoritative for and therefore won't forward requests for to another DNS server. I think your only choice is to add static entries for the DMZ machines into the backend zone.

    From joeqwerty
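As a rough illustration of the merged-zone idea suggested above, a BIND zone of this shape could accept dynamic updates from the domain controllers. The zone file name and the update ACL addresses below are placeholders, not values from the question:

```
// named.conf fragment -- hypothetical sketch
zone "domain.com" {
    type master;
    file "dynamic/domain.com.zone";
    // Let the AD domain controllers register their own records:
    allow-update { 10.0.1.10; 10.0.1.11; };
    // AD registers underscore-prefixed names (_ldap._tcp, ...):
    check-names ignore;
};
```

With BIND holding the zone, the split-brain views can stay in one place, which avoids the forwarding problem entirely.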

Can anyone help me make my reverse proxy actually cache?

Hi folks,

I'm trying to configure a reverse caching proxy but so far have had no luck. I would preferably like to use Apache (that will be all it will be used for), but am open to solutions using other software that can also run on Mac OS X 10.6 (I have also tried Varnish and Squid, but with no more luck).

We're running a system with about 80 mac mini clients that will be requesting lots of video from a server. To reduce load, we thought we could use Apache (which comes on the macs by default) to cache this video forever (or at least as long as possible) onto the macs' disks.

I have managed to get a reverse proxy set up with Apache using ProxyPass etc., but when I tried to add CacheEnable disk / to the configuration, nothing happened (I do have mod_disk_cache loaded).

Can anyone help with my issue? The apache config file is here

Thanks in advance

Edit: So far I have been testing it with smaller text files, and it hasn't been caching them properly. This suggests the problem has nothing to do with the video downloads themselves, but rather with the cache configuration.

  • "requesting lots of video"

    That's a rather loaded statement. How? The products you've mentioned all are HTTP related - if you're having problems with caching and HTTP and video, it makes me think that the clients may be using progressive download. Is that the case? IIRC squid and possibly lots of other proxies have trouble correlating range requests with fully cached content.

    You might want to consider serving up the content using a slimmer webserver (nginx?) for the static content. Note that as per my comment here the OS disk cache will be the most efficient place to cache the content.

    Lenary : I have updated the question with how I've been testing this - I have just been requesting small text files and yet it is not caching them.
    symcbean : Show us the headers in the request and response (NB fixing the caching for small files probably won't resolve the pseudo streaming issue)
    From symcbean
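Since the question's config file isn't reproduced here, the following is only a generic sketch of a disk-caching reverse proxy for Apache 2.2; the origin hostname and paths are placeholders. Both mod_cache and mod_disk_cache must be loaded, and without CacheIgnoreNoLastMod many generated responses are never cached at all:

```
# Hypothetical minimal reverse caching proxy (Apache 2.2)
LoadModule proxy_module       modules/mod_proxy.so
LoadModule proxy_http_module  modules/mod_proxy_http.so
LoadModule cache_module       modules/mod_cache.so
LoadModule disk_cache_module  modules/mod_disk_cache.so

ProxyRequests Off
ProxyPass        / http://origin.example.com/
ProxyPassReverse / http://origin.example.com/

CacheEnable disk /
CacheRoot /var/cache/apache2/proxy
CacheDirLevels 2
CacheDirLength 1
# Cache responses that lack a Last-Modified header:
CacheIgnoreNoLastMod On
CacheDefaultExpire 3600
```

Checking the response headers (as symcbean suggests) will show whether the origin is sending Cache-Control: no-cache or similar directives that override this configuration.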

Build an Internet / gaming cafe

Over the past few days I have had this strange idea in my head to own and operate an Internet cafe and gaming center. This is not aimed to be a post discussing the pros and cons of the business or anything like that.

What I am actually interested in learning from the community is if YOU were planning the infrastructure, how would YOU do it? This includes computer hardware, monitors, network hardware, and possibly floor planning as well. Floor planning could also involve non-hardware layout such as extra furniture if you wanted to set up a Wi-Fi lounge.

To make it more fun, I will list a series of options so that you could choose a specific "path" to take in your planning of your own Internet-cafe.


  • Simple Internet Cafe
  • Internet Cafe and Gaming Center (PC only)
  • Internet Cafe and Gaming Center (PC + Console)
  • Internet Cafe and Lounge
  • Gaming Center and Lounge
  • Custom Style (whatever you want)
  • Building is the easy part. Maintaining will be the challenge. Real gamer types will find it hard to play on commodity keyboards, mice, and controllers. But when I've shelled out thousands for the good stuff, I'll find it hard to handle users destroying them.

    I also haven't figured out how to handle those overstaying their welcome. If there are no time limits, the stations will all be taken by local kids who spend the least and take the most.

    The business plan may need to factor in a time = money model, either by direct pay as you go, moral obligation to buy a beverage/snack, or maybe indirect temptation to buy something that looks good while you're there.

    Sorry, I don't have answers. I only have problems. :)

    TheTXI : All of your points are very good. In general I was looking to see what type of equipment people would buy, which is very important depending on what type of cafe you want to run. If you simply want a high volume of users browsing the internet, cheaper PCs would be in order, but if you were going into PC gaming you would want much better monitors, higher-end graphics and processors, etc.
    spoulson : Yeah, but these risk factors will drive what you buy or it'll be a hard lesson when you have to redo it all at a loss.
    From spoulson
  • If you have areas for people with laptops, don't forget the outlets! The local Starbucks has only 4 outlets that are accessible. As a result I found a coffee shop that has plenty of outlets.

    From epochwolf
  • I think you should be mostly concerned about security. Anyone can drop a few thousand dollars on computer equipment and connections.

    But what to do about possible trojans, sniffers, theft, etc...?

    There are some commercial solutions to these problems, but this is definitely a hazard area.

    From Unknown
  • One of the challenges will be keeping the computers secure from a hardware and software perspective.

    Depending on the degree of supervision your hardware will have, I would expect that there would have to be a varying degree of physical security for your machines. Some questions to ask would be:

    • Will there be a security camera?
    • Is the place small enough to have ample supervision?
    • Are there any blind spots?

    If the hardware is at risk of theft, it may become necessary to physically lock down the hardware with locks and cut-proof wiring to anchor them to furniture.

    In terms of software security, again there are some points to consider:

    • Are machines from the outside allowed to connect to the LAN?
    • Are removable storage devices going to be allowed to connect to the machines?

    If machines or storage devices from the outside are allowed onto the local network, this can be a breeding ground for viruses and worms. It would probably be a good idea to have up-to-date antivirus software (with automatic updates) on all machines, and firewalls with sensible settings that only allow certain ports to pass.

    Not to mention the need for security from the outside world, i.e. the Internet. This is something that can't be ignored, but probably could be performed by having an outward facing firewall that all systems can go through.

    Security products which can be centrally controlled will probably come in handy when working with more than a few machines. Labor is going to cost money (and yes, your time is worth money as well), so investing in a system with centralized control will probably make your life much easier.

    If the systems are running Windows, I think that Windows Firewall settings can be controlled by Group Policy Objects on Active Directory. I'm aware that there are enterprise security products which allow central control of security software, but again, these two suggestions may be expensive depending on the scale of machines on a network. Again, weigh the costs and benefits.

    Also, one additional risk to consider in the security planning is the danger of spreading viruses and worms through the LAN. This may become a liability if people complain that their machines were infected and caused problems. Perhaps having the patrons sign a waiver would get around this, but I'm not a lawyer, so it would be a good idea to consult one for legal advice.

    From coobird
  • From the security perspective, here is something to consider.

    Whilst I was in Japan, I noticed that all the computers in the school were 'reimaged' at every boot, even before Windows started to load. I am not sure if this was done over the network or if there was a static image on the computer itself, but I liked the idea that the maximum amount of damage a user can do (to the software) is easily fixable by a restart. This could be enforced by having the computer automatically restart every time a user is finished with it.

    The closest solution I found to this was Deep Freeze by Faronics.

    From joshhunt
  • @joshhunt: check out Windows SteadyState.

  • To play up the Gaming aspect, I would invest in network cards from Bigfoot Networks. If you can provide the best experience possible, people will come back.

    Tom O'Connor : Is there actually any performance benefit from these, say over Intel or Broadcom server platform cards? Or is it just gold-plated snake-oil, like the high-end hifi market is with their cables. That said, why not go the whole hog, and get gold-plated CAT7 cables.
    From Joseph
  • I have wondered about this myself, and it is hard to make perfect use of the space while being attractive to gamers and maintaining an attractive 'café' appeal so the tourists won't run away.

    You have two very specific needs here. The gamers will need at least medium-grade gaming rigs while your 'café' type users couldn't care less about the hardware they use. They need email, browsing and messaging.

    For the café types, I would just invest in Atom-based mini computers and focus on the 'café' appeal in that specific area; this means more lighting than the gamers are used to.

    For the gamers I would avoid always buying the latest hardware, it simply doesn't generate enough revenue to be affordable. Try medium-range stuff.

    Obviously, you don't have the time or money to keep installing the operating system from the ground up whenever somebody breaks something, so unattended installs and imaging are your friends (Ghost or a partition image, for example).

    On the café machines I'd just install some lightweight Linux desktop environment, like Xfce.

    Final words
    This is a tricky business because these two customer groups are extremely different in many ways. Gamers like to yell at each other while bouncing rockets in Quake Arena while tourists like their peace and quiet while browsing their email, blogging, etc. Gamers prefer dark areas where the glare doesn't mess with their 'fragging skills'. Tourists like open, bright areas.

    The only thing these two groups have in common is that both groups will likely buy a lot of caffeine and food.

    From Andrioid
  • Build machines that are able to handle the latest games. Don't skimp on the video cards. Use software that will skin Vista (you've gotta have DX10) to prevent low-level access, allowing access only to the games. I would be curious as to how many gamers want coffee while they are playing, though.

    Keith : by the way, one of my clients is a large gaming company with a unique twist. their pc gaming side has about 25 machines running smartlaunch. they also offer wifi. around the pc's they have about a dozen current gen consoles on samsung lcd tv's and a "stage" set up for rockband/guitar hero.
    From Keith
    • Rent out hard disk space / provide off-site backup --- I have a couple of friends who take serious amounts of photos. They can't back them up to the cloud; the internet's too slow. Much better if they can walk in with a USB hard drive and copy it over the local network.

    Assuming basic internet cafe:

    • I'd want the client machines to boot from the network and not have a hard disk
    • I'd look into having multiple consoles (monitor+keyboard+mouse) running off a single computer --- this might work out cheaper than giving each console its own computer

How to configure multiple domains pointing to different directories on a server

I've been given a new domain and I want to point it to a specific directory on my server: /var/www/specific/path

What are the steps for doing it ? Could you suggest me a tutorial ?

I need to repeat the same operations for several domains.



NameVirtualHost a.b.c.d:80 (I've hidden my real IP)

<VirtualHost a.b.c.d:80>
    DocumentRoot /var/www/sites/website
    CustomLog /var/log/apache2/ combined
</VirtualHost>
  • I'm assuming you want to host multiple sites on a single IP address and that you're using apache. If so, then essentially, the following lines of code in my httpd.conf do the job for me:

    NameVirtualHost a.b.c.d:80

    <VirtualHost a.b.c.d:80>
        ServerName site1.example.com
        DocumentRoot /path/to/site1/documentroot
        CustomLog /usr/local/apache/logs/site1.log combined
    </VirtualHost>

    <VirtualHost a.b.c.d:80>
        ServerName site2.example.com
        DocumentRoot /path/to/site2/documentroot
        CustomLog /usr/local/apache/logs/site2.log combined
    </VirtualHost>

    Where a.b.c.d is your server's IP address, and the ServerName and log-file values (site1.example.com, site1.log, etc.) are placeholders for your own sites' names. Repeat <VirtualHost> declarations as you need them. The CustomLog declaration isn't vital, but it helps me to keep my sites' logs separate from one another.

    You can find more documentation on apache at; click on the version of your apache server, then enter (eg) NameVirtualHost in the search box (though the doco for that can also be found directly at - it's not a feature that changes much between versions).

    MadHatter : Sorry, I should have added that the apache project has a very nice piece of doco on all of this at
    Patrick : @MadHatter Sorry for the delay, they finally transfer the domain, so I can only test now. Shortly, I cannot make it work, the domain doesn't lead to my website. I've updated the question with my current configuration. What am I missing ?
    Patrick : Also, this is what I get when I do add the NameVirtualHost line: [Thu Oct 07 16:51:46 2010] [warn] NameVirtualHost a.b.c.d:80 has no VirtualHosts
    MadHatter : Could you confirm that, both on the machine in question, and on another random machine, when you nslookup, you get a.b.c.d ?
    Patrick : @MadHatter I actually get another ip. Wow. But I don't get why, I bought a VPS slice and they assigned me a specific ip. Then I transferred the domain, and they just assigned it to another ip ?!
    Patrick : Is maybe some additional step I have to do before to configure Apache ?
    MadHatter : Get the DNS working. When apache starts up and sees VirtualHosts, it resolves each of the declared names to decide which IP to put it on. In your case, it'll be getting a different address from that declared in NameVirtualHosts (which probably explains the "NameVirtualHost a.b.c.d:80 has no VirtualHosts" error). You will not get this working without either self-consistent DNS or a hack; try for the former, it's less painful in the long run, and in any case you'll have to get it right before the rest of the world sees your site.
    From MadHatter

Exim Problem: Sender address rejected: need fully-qualified address

My mail log returns the following error when sending email to a gmail account: Sender address rejected: need fully-qualified address

Here is the full error message:
2010-10-08 03:44:58 1P4214-0007MM-NL <= alleart@V100723TU7C41-1 U=alleart P=local S=527
2010-10-08 03:44:58 1P4214-0007MM-NL ** R=smart_route T=remote_smtp: SMTP error from remote mail server after RCPT TO:<>: host []: 504 5.5.2 <alleart@V100723TU7C41-1>: Sender address rejected: need fully-qualified address
2010-10-08 03:44:58 1P4214-0007MP-Rm <= <> R=1P4214-0007MM-NL U=mailnull P=local S=1556
2010-10-08 03:44:59 1P4214-0007MM-NL Completed

Exim is set to use the following relay:

smart_route:
  driver = manualroute
  domains = !+local_domains
  transport = remote_smtp
  route_list = *

The server is running CentOS and Exim 4; the email is sent using PHP's mail() function.

Thank you for your time and effort

  • It looks like this is the address the server is trying to use to send mail: alleart@V100723TU7C41-1 and it's being rejected. You may need to specify a valid email address in your php.ini file, or request a change through your hosting provider.
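The rejection hinges on what "fully-qualified" means here: the domain part of the sender address must contain at least one dot (user@host.example.com), while a bare machine hostname like V100723TU7C41-1 does not. A minimal sketch of that check in Python (the helper name and example addresses are illustrative, not part of Exim):

```python
def is_fully_qualified(address):
    """True if the sender's domain part looks like an FQDN (contains a dot)."""
    local, sep, domain = address.partition("@")
    return bool(local) and sep == "@" and "." in domain

print(is_fully_qualified("alleart@V100723TU7C41-1"))   # False -> the 504 rejection above
print(is_fully_qualified("alleart@mail.example.com"))  # True
```

This is why fixing the server's hostname (or setting a real From address in PHP) resolves the error: the remote MTA applies essentially this test to the envelope sender.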

    John : Hmm yes V100723TU7C41-1 shows as hostname in SSH. Do you know how I can make it dynamic for each domain on the server?
    Force Flow : The address needs to point to a valid SMTP server. Do you have qmail installed on the server? Can you point to an external SMTP server or mail relay?
    John : The server uses a mail relay. Don't think I have qmail, it's running Exim. Details of my mail configuration can be found in my other question
    John : I only needed to change the server name from something else than the default generic one. Thank you for your help mr Force Flow or is it Clark Kent
    From Force Flow

Where to get RRDs.pm

I'm trying to set up opsview (Nagios) on a CentOS 5 server running perl 5.8.9

When I try to start it, it can't find RRDs.pm. Turns out, neither can I. It's not on CPAN and I've been unable to determine what package would provide it. yum provides "*/RRDs.pm" doesn't return any results.

Edit: so we've established that it should come with the perl-rrdtool package, but unfortunately it doesn't. Where do I go from here?

  • You have to install rrdtool. CentOS doesn't provide this package by default, but you can use Dag Wieers' repository:

    $ cd /etc/yum.repos.d
    $ vim dag.repo

    insert the following lines:

    [dag]
    name=Dag RPM Repository

    and :wq (save) the file. After this, just install the package via yum.

    $ yum install rrdtool
    bemace : Unfortunately I already had `rrdtool-1.4.4-1.el5.rf.i386` installed from the CentOS repository, so that's not it either
    From Sascha
  • RRDs.pm should be provided by perl-rrdtool, but you indicate that you've already installed this package.

    Your script can't find RRDs.pm, but it may still be installed on your system, just not in a place where Perl expects to find it.

    What does one of these commands tell you?

    (You might need to update the locate database first, with /etc/cron.daily/mlocate or a similar cron command)



    locate RRDs.pm
    find / -type f -name RRDs.pm
    bemace : I did find an old RRDs.pm under the perl 5.8.8 libs, but after copying it into the 5.8.9 tree it just segfaults.
    Stefan Lasiewski : I pointed you to the i386 files (I corrected the link). You might have the x86_64 architecture. Make sure you are downloading the correct RPM for your architecture:
    Stefan Lasiewski : CentOS 5.5 ships with Perl 5.8.8 by default. If you are using Perl 5.8.9, then it sounds like you are starting to go the custom route. A simpler solution would be to stick with the default version of Perl, unless you really need a newer version of Perl.
    bemace : This time you've hit it. Running `strings` against the rpm it appears to be hardcoded to install under perl 5.8.8. I've reverted to 5.8.8 and it's now working! Hopefully there wasn't a good reason I'd upgraded to 5.8.9

IP addressing issue

I am not bad with networks; however, I am doing badly at IP addressing, and it is affecting me very much. For example, while working on access lists on a Cisco router, I had to assign a serial port an IP address with the subnet mask I thought the corresponding mask should be, because the address (in the 24.x.x.x range) is a Class A IP address.

How may I improve my IP addressing skills?

  • When you are using a custom subnet you are using classless IP addressing. This allows a large block of IP addresses, like those in a Class A network, to be sliced up into smaller networks. You do this for a variety of reasons. Do a Google search for "subnetting tutorial" and you will find a ton of resources. Cisco's site has some very good games for practicing subnetting as well.

    jscott : Why not just provide @user56454 a link to SF's very own hit question: [How does Subnetting Work?](
    Robert Kaucher : Oooh. That is quite good. I actually did not know about it. Stored for future use.
  • is an example of a variable-length subnet mask, which is used with CIDR, or classless inter-domain routing. CIDR is a way to split Class-A/B/C networks into smaller subnetworks where you don't need, say, a full 254 addresses (or 16 million, in the case of a Class-A).

    In this case, is a mask for a network of 14 usable hosts. A tool like whatmask is very helpful here:

    $ whatmask <ip>/28
    IP Entered = ..................: <ip>/28
    CIDR = ........................: /28
    Netmask = .....................:
    Netmask (hex) = ...............: 0xfffffff0
    Wildcard Bits = ...............:
    Network Address = .............: <first address of the /28 block>
    Broadcast Address = ...........: <last address of the block>
    Usable IP Addresses = .........: 14
    First Usable IP Address = .....: <network address + 1>
    Last Usable IP Address = ......: <broadcast address - 1>

    CIDR is useful in cases where you don't need a full Class-C network (254 addresses) or something even larger. If you only have a network of a dozen or so hosts, it's a more efficient use of IP address space.

    Hope that helps.

  • The notion of class is similar to subnets. The difference is that a class block was assigned to a particular company or person. In this case, the IP block 24.x.x.x was given to a company that decided to divide it into multiple parts; maybe the company sold a small part of its IP address space. The class notation is not really used these days.

    Instead we use netmasks. A netmask helps a router separate the subnet part of an address from the host part. How? It's simple: every bit set to 1 is part of the subnet address, and every bit set to 0 is part of the host address. The router just has to perform a bitwise AND of the netmask and the IP address to retrieve the subnet address. Another way to write the netmask is by appending /x to the base subnet address, where x is the number of 1-bits in the mask (for example, /24 corresponds to

    A subnet is a group of hosts in the same network, in the sense that no router has to be contacted to establish a connection between two hosts in the same subnet, so you want to assign the same subnet address to all of them. Two addresses are reserved when you form a subnet: the first and the last. The first address designates the subnet itself; it is not a usable address, and anything sent to it will be discarded. The last address of a subnet is the broadcast address: anything sent to it will be received by every host on the subnet.

    If we take a subnet with the netmask (a /24), the first address designates the subnet, the last is the broadcast, and the 254 addresses in between can be assigned to hosts.

    Looking at the possible subnet sizes:

    /32 (netmask Only one IP address, with no subnet address and no broadcast. It is used for point-to-point connections.

    /31 (netmask Largely useless, because most routers set aside a subnet address and a broadcast, leaving no addresses for hosts. But if your router follows the exception to that rule, you have two possible IPs.

    /30 (netmask More useful: you have one subnet address, two host IPs, and the broadcast.

    /x: you can calculate the number of possible hosts on a network with the rule 2^(32-x) - 2.

    /0 (netmask Not possible, because you need a subnet part.

    Applying these rules to the internet itself leads to some odd corner cases.

    The IP would designate the subnet of all subnets, i.e. the whole internet. In fact it points to nothing, just like the first address of any subnet.

    The IP would designate the broadcast address for all internet hosts, but in practice it is restricted to the local subnet (for obvious security reasons). Hosts use it when configuring their IP with the DHCP protocol.

    Finally, your own example: your netmask is, so it's a /28. That gives 16 possible IP addresses; removing the 2 reserved ones leaves 14 possible hosts. The first address of the block is the subnet address, the last is the broadcast address, and the range in between holds the usable host addresses.

    Hope it was interesting and useful.

    From Gopoi
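The 2^(32-x) - 2 rule from the answer above is easy to check with a few lines of Python (the function name is ours, purely illustrative):

```python
def usable_hosts(prefix_len):
    """Usable host addresses in an IPv4 network with the given prefix length.

    Subtracts the two reserved addresses (subnet and broadcast); clamped
    to zero so the /31 and /32 edge cases don't go negative.
    """
    return max(2 ** (32 - prefix_len) - 2, 0)

print(usable_hosts(24))  # 254
print(usable_hosts(28))  # 14
print(usable_hosts(30))  # 2
```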
  • Here is a section from a Cisco class I am taking (note the bolded text near the end).

    Historic Network Classes

    Historically, RFC1700 grouped the unicast ranges into specific sizes called class A, class B, and class C addresses. It also defined class D (multicast) and class E (experimental) addresses, as previously presented.

    The unicast address classes A, B, and C defined specifically-sized networks as well as specific address blocks for these networks, as shown in the figure. A company or organization was assigned an entire class A, class B, or class C address block. This use of address space is referred to as classful addressing.

    Class A Blocks

    A class A address block was designed to support extremely large networks with more than 16 million host addresses. Class A IPv4 addresses used a fixed /8 prefix with the first octet to indicate the network address. The remaining three octets were used for host addresses.

    To reserve address space for the remaining address classes, all class A addresses required that the most significant bit of the high-order octet be a zero. This meant that there were only 128 possible class A networks, /8 to /8, before taking out the reserved address blocks. Even though the class A addresses reserved one-half of the address space, because of their limit of 128 networks, they could only be allocated to approximately 120 companies or organizations.

    Class B Blocks

    Class B address space was designed to support the needs of moderate to large size networks with more than 65,000 hosts. A class B IP address used the two high-order octets to indicate the network address. The other two octets specified host addresses. As with class A, address space for the remaining address classes needed to be reserved.

    For class B addresses, the most significant two bits of the high-order octet were 10. This restricted the address block for class B to /16 to /16. Class B had slightly more efficient allocation of addresses than class A because it equally divided 25% of the total IPv4 address space among approximately 16,000 networks.

    Class C Blocks

    The class C address space was the most commonly available of the historic address classes. This address space was intended to provide addresses for small networks with a maximum of 254 hosts.

    Class C address blocks used a /24 prefix. This meant that a class C network used only the last octet as host addresses with the three high-order octets used to indicate the network address.

    Class C address blocks set aside address space for class D (multicast) and class E (experimental) by using a fixed value of 110 for the three most significant bits of the high-order octet. This restricted the address block for class C to /24 to /24. Although it occupied only 12.5% of the total IPv4 address space, it could provide addresses to 2 million networks.

    Limits to the Class-based System

    Not all organizations' requirements fit well into one of these three classes. Classful allocation of address space often wasted many addresses, which exhausted the availability of IPv4 addresses. For example, a company that had a network with 260 hosts would need to be given a class B address with more than 65,000 addresses.

    Even though this classful system was all but abandoned in the late 1990s, you will see remnants of it in networks today. For example, when you assign an IPv4 address to a computer, the operating system examines the address being assigned to determine if this address is a class A, class B, or class C. The operating system then assumes the prefix used by that class and makes the appropriate subnet mask assignment.

    Another example is the assumption of the mask by some routing protocols. When some routing protocols receive an advertised route, it may assume the prefix length based on the class of the address.
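The classful assumption described above (the OS picking a default mask from the address alone) can be sketched in a few lines of Python; the function is illustrative, not any real operating-system API:

```python
def classful_default_mask(first_octet):
    """Default mask the historic classful rules would assume for an address."""
    if first_octet < 128:    # leading bit 0    -> class A
        return "255.0.0.0"
    if first_octet < 192:    # leading bits 10  -> class B
        return "255.255.0.0"
    if first_octet < 224:    # leading bits 110 -> class C
        return "255.255.255.0"
    return None              # class D/E: no default host mask

# A 24.x.x.x address, as in the question, gets the Class A default --
# which is why a /28 mask like looks surprising at first.
print(classful_default_mask(24))   # 255.0.0.0
```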

    Classless Addressing

    The system that we currently use is referred to as classless addressing. With the classless system, address blocks appropriate to the number of hosts are assigned to companies or organizations without regard to the unicast class.

    From dbasnett

Choosing the right SQL server replication type

This is the scenario:

  • 1 x SQL Server 2000 database running at remote site (Publisher)
  • Website running at a different site requires mostly read and some write access (<5 tables) to the database
  • Cross Network bandwidth utilization must be kept to a minimum

We would like to use SQL Server replication for this, with a subscriber at the location as the web server. Is merge replication the most appropriate replication type for this?

I did consider Transactional Replication with Updating subscriptions but according to technet, this is being discontinued in the next release of SQL Server.

We would like the replication to happen as real time as possible but network utilization is a consideration.

Thanks Ben

  • You'll want to use Merge. That'll be the best option when you need to do updates on both sites.

    You might want to upgrade to SQL 2005 or newer as SQL 2000 isn't a supported version any more.

    Ben : @mrdenny - is it just that 2000 isn't supported or is 2005+ a prerequisite? (note that the subscriber will be 2005+).
    Farseeker : There's a caveat to using Merge replication though, and that's your app needs to have been designed fairly well, in that insert statements without explicitly specified columns will fail because of the `rowguid` column that gets inserted into the table. I strongly recommend setting up a 2nd database, turning on Merge Replication and testing your app against it before you do it to your live scenario.
    mrdenny : @Ben Merge replication was available in SQL 2000, but it was improved a lot in SQL 2005 and up (as was most everything else). Pay careful attention to what @Farseeker posted about the new column being added to every table. Test a ton before trying this in production.
    From mrdenny