Friday, January 14, 2011

Skype network monitoring

Can Skype chats be monitored/read by an internal network admin or sysadmin? Can a company store the chats you make in Skype and read them? I can't find a clear answer for this on the net. Also, are Skype's chat history files encrypted, or can they be read by someone without your Skype login?

I hear Skype is encrypted by default and all chats and calls are protected, but I have heard of people being told that "Skype logs are kept and monitored" by their employer. Is this possible?

  • According to some independent reviews, Skype seemed to be secure circa 2005. You might assume that the government could get at it if they wanted to.

    Google says China is monitoring.

    grawity : China has its own version of Skype.
    From Sam
  • There is a paper from the blackhat conference at http://www.blackhat.com/presentations/bh-europe-06/bh-eu-06-biondi/bh-eu-06-biondi-up.pdf which includes some interesting insights into the Skype protocol and the software itself.

    Maybe this gives you an overview of the problems with monitoring Skype traffic at the network level.

  • Speculation aside, are you looking for what you can do in house or worried about what the government can do?

    If it's in-house, i.e. some way for you to monitor what employees are saying over it, I've never been able to find anything that will do this. It also seems you must be logged into Skype to see chat history on the computer.

    So the only ways to monitor would be screen-monitoring software or a keylogger, or you could block Skype entirely due to security concerns. But then people can use any number of other secure chat programs as well.

    Skype is like this by default, but people can set up any number of other clients that are also secure and can't be monitored, so if Skype is a concern due to this lack of monitoring, you need to block all chat programs.

  • The initial problem of detecting Skype traffic is in itself a rather difficult one.
    "Skype, is it right for you?" has a description of the problem and references to papers on it.
    Final notes from that link (which is a good read):

    So, the final questions. Is Skype spyware?
    In my opinion, no. It does not contain spyware and never has.

    Is Skype useful?
    In my opinion, yes.

    Is Skype beneficial to my environment?
    Is it? That’s a determination that only you can make.
    Do your clients sit behind NAT, Firewalls, and/or proxies?
    Then they won’t be supernodes or Relay nodes. They are just clients.

    Do you have a requirement to monitor all IM, file transfers, and/or voice calls?
    If so, Skype is hard. It’s encrypted.

    Once you figure out Skype is active, intercepting the traffic is a much more complicated problem.

    From nik

Rewrite outgoing mail for Exchange and virtual domains

I have multiple domains that a single Exchange install hosts mail for. Here is the scenario:

matt@domainA.com
info@domainB.com
matt@real_company_domain.com

matt@domainA.com and info@domainB.com are virtual addresses that deliver to matt@real_company_domain.com.

Is it possible to configure Exchange to rewrite the outgoing From: so that it comes from whichever domain the original mail was sent to? Right now, if a user emails matt@domainA.com and I respond, the reply comes from matt@real_company_domain.com. I'd like Exchange to automatically rewrite it to From: matt@domainA.com. If someone emails info@domainB.com, the response is rewritten to From: info@domainB.com.

I hope that's clear. Is this at all possible? Thanks!

  • There's no automated mechanism in any version of Exchange to do what you're looking for if you're determined to bring email for all those addresses into a single mailbox. You're going to be stuck creating "Contact" objects with those alternative addresses assigned to them, configured to deliver incoming mail to the "real" mailbox. Then, you'll need to grant "Send As" permission on those contact objects to the user of the "real" mailbox, who will then have to control the "From:" address in Outlook by choosing the appropriate contact from which to send replies.

    There has never been a good "story" from Microsoft to do what you're trying to do with any version of Outlook / Exchange.

  • Use Choose From

    From Jim B

How do you firewall Linux so that unprivileged accounts can only access the web?

I have a Debian server that allows users to log in. I don't mind them accessing the web or downloading files, but I want to otherwise restrict their internet access from that machine. How should I set up my IPTABLES or other firewall to make this work easily?

  • I would suspect you would simply block all inbound and outbound ports for the host except for ports 22 (ssh) and 80 (web). If you're using this computer for your own purposes as well as helping some friends learn, and they require things like email, instant messaging, etc., I would recommend creating a special group just for them that can only access a specific list of applications. I think you need to specify whether this is a standalone server, or a workstation for you plus a server for them.

    From bobby
  • This is actually extremely tricky from a technical perspective (the network layer doesn't usually know anything about users; there is no "user" field in a network packet).

    But, Linux, being totally awesome, does have a solution for you. You'll need the iptables "owner" module, and rules along the lines of this:

    iptables -A OUTPUT -o eth0 -p tcp --dport 80 -j ACCEPT
    iptables -A OUTPUT -o eth0 -m owner --uid-owner 500 -j DROP
    

    Where "500" is the UID of the user you'd like to block from hitting the net. The first rule just allows all outbound port 80 traffic.

    You probably need to load the owner module before this will work:

    modprobe ipt_owner

    So, add that to your rc.local file, or similar. This assumes, of course, that your system has this module installed. I don't know what package provides it on Debian. It might be in the standard iptables package.
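
    One wrinkle worth noting: the restricted user's own DNS lookups are also outbound packets owned by that UID, so with only the port-80 exception they won't be able to resolve hostnames. A slightly fuller sketch (treating ports 443 and 53 as part of "web access" is an assumption on my part):

    # Allow web traffic and name resolution for everyone.
    iptables -A OUTPUT -o eth0 -p tcp --dport 80 -j ACCEPT
    iptables -A OUTPUT -o eth0 -p tcp --dport 443 -j ACCEPT
    iptables -A OUTPUT -o eth0 -p udp --dport 53 -j ACCEPT
    # Drop everything else generated by the restricted user (UID 500).
    iptables -A OUTPUT -o eth0 -m owner --uid-owner 500 -j DROP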

    MarkR : Yes, the owner module can do it. You could also block outbound traffic from processes whose group id is not some group you've authorised.
    David : The owner module only works for locally generated packets, not forwarded packets, so it won't do what you're claiming.
    swelljoe : OP is talking about locally generated packets, isn't he? He said "I have a Debian server that allows users to log in". I assumed that meant the users are local.
    From swelljoe
  • Another option is to configure a proxy server (Squid) somewhere that allows general anonymous internet access but requires a login to do anything else. Then block access from your server at the firewall but allow the proxy through.

    If you only have one machine, I would echo swelljoe's suggestion. Or combine the two ideas and make everything more granular if you prefer :)

    From MikeyB
  • You can use an SELinux policy for this, but unfortunately it's a bit more complicated to set up than the iptables solution.

  • I did this once using a combination of squid and ident - a really old Linux/Unix service that reports the username that owns a connection. Generally speaking, ident is a really, really bad idea (it's unencrypted and you can spoof it pretty easily; it's what IRC uses, btw), but for a known set of machines it works pretty well.

    From LordT
  • You're looking for a proxy, along with iptables rules. Use iptables to restrict port access and redirect traffic to the proxy; in the proxy you filter the content you do or don't want getting through. (The owner module only works for packets created on the firewall itself, not packets coming from your network.)
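
    A minimal sketch of the iptables half, assuming a Squid instance listening locally on port 3128 (the UID and port are illustrative):

    # Force the restricted user's outbound web traffic through the local proxy.
    iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner 500 -j REDIRECT --to-ports 3128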

    From David

MySQL differential dump? Other strategy for restore?

Is it possible to generate a differential dump with mysqldump - between two databases, or, ideally, between a database and a dumped version of that database?

Here's the issue I've got - I have an active/passive HA mirror of MySQL with the actual DB data (the physical MyISAM files, indexes, etc.) residing on a shared DRBD mirror. Last week, the primary node failed and the DRBD master shifted from the primary to the secondary node, and service takeover happened like it was supposed to.

Of course, a bunch of changes have been written to the secondary copy's version of the DRBD mirror, so when the primary comes back up, it takes over the DRBD volume but both sides consider their halves "out of sync" (i.e. StandAlone).

So, now I have a situation where there are two divergent sets of transactions that have taken place on the database:

  • That which happened in the time the primary node was down and the data was being written on the secondary;

  • That which happened since the primary node came back up and took the services over again; they were never synchronised!

DRBD gives me the ability to revert to either "half" of the mirror (in its current partitioned state) as the "master" revision, but as can be seen, either way causes me to lose data.

Oh, yeah: There was no replication and there were no local transaction logs, so there are no binlogs to replay. Oops. facepalm

There are nightly backups, of course, so I can revert the DB to just about any ~2 AM state from the past year.

I suppose what I'm looking to do is revert to the version of the database that's on the secondary "half" now (i.e. changes that happened while primary was down) and then try to somehow apply the changes in state from that point forward to the present state of the database on the primary's "half" cumulatively.

The problem is, I have no idea how one would go about that without replaying transaction logs.

Insights appreciated, and thanks in advance!

  • You shouldn't need to replay the data on the failed server. This situation is called split brain, and you will need to tell DRBD what you want it to do.

    http://www.drbd.org/users-guide/s-resolve-split-brain.html

    Edit: Didn't realize you said the primary came back online... not really sure. The binary log should contain a record of all the transactions executed by the server.

    Edit (Take 2): I need to make a note to read the whole question before posting... :( sorry

  • Ouch. Sucky situation to be in. I hope you've turned on binlogging now...

    Two tools that might be of help:

    • MySQLdiff - could probably work.
    • Maatkit - Look at this one first. I think it actually has some tools designed to help resolve split-brain issues; see the sketch after this list.
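
    A hedged sketch of what reconciling the two halves with Maatkit's mk-table-sync might look like (the hostnames, database, and table are placeholders; --print previews the statements before you commit to --execute):

    # Preview the row changes needed to bring one half in line with the other.
    mk-table-sync --print h=primary-host,D=mydb,t=mytable h=secondary-host

    # Apply them once the output has been reviewed.
    mk-table-sync --execute h=primary-host,D=mydb,t=mytable h=secondary-host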

CIFS Volume mounting problem

Hi,

When I ran this command

mount -t cifs //192.168.10.1/ycw /mnt -o rw,user=testuser,password=testpass

I got mount error 112, "Host is down", when I tried to mount a directory on a CentOS machine from a Debian machine.

Does anyone have an idea how to resolve this mount problem?

Waiting for the reply.

Jaby

  • Did you umount and then mount again? When you do this, does mount give any errors, or are there any error messages from the command 'dmesg' or in /var/log/messages?

  • Some questions:

    • Can you ping 192.168.10.1?
    • Can you access the share using smbclient? (See the sketch after this list.)
    • As mentioned by Kyle, is there anything in your logs?
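
    For the smbclient check, something along these lines, using the credentials from the question:

    # List the shares the server offers.
    smbclient -L 192.168.10.1 -U testuser

    # Try to open the share itself.
    smbclient //192.168.10.1/ycw -U testuser
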
    From David

How to keep Apache conf files in sync in web cluster

What's the best practice for keeping httpd.conf and php.ini files consistent across multiple web servers behind a load balancer?

I could periodically rsync the files, or program a custom deployment script. What other options am I missing?

  • Our web nodes have an NFS cluster behind them. We symlink the default config file locations for both Apache and PHP to the shared copies on the NFS share. A very simple shell script then does some basic housekeeping (rebuilding AppArmor rules, etc.) and then iterates over the web nodes, triggering first an apache2ctl configtest and then apache2ctl graceful.

    Other ideas:

    1. Simple shell script wrapped around rsync or scp (see the sketch after this list).
    2. For larger installations, a Puppet recipe could reliably deliver the updated config file and bounce Apache afterwards.
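
    For option 1, a minimal sketch (the hostnames and file paths are illustrative only):

    #!/bin/sh
    # Push the config files to each node, then validate and reload, as in
    # the NFS-based setup described above.
    NODES="web1 web2 web3"

    for node in $NODES; do
        rsync -av /etc/apache2/apache2.conf "$node:/etc/apache2/apache2.conf"
        rsync -av /etc/php5/apache2/php.ini "$node:/etc/php5/apache2/php.ini"
        ssh "$node" "apache2ctl configtest && apache2ctl graceful"
    done
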
    From Insyte
  • Ideally, you would set up a configuration management server using Puppet or other similar tools (for example, cfengine).

    The best practice I could suggest with regard to server configuration in general is:

    "Never log into a machine to change anything on it. Always make the change on the [configuration management server] and let the change propagate out" (taken from http://www.infrastructures.org/papers/bootstrap/bootstrap.html)

    Best regards, Alex

    David Pashley : +1 everyone loves the puppet.
    pboin : +1 for mentioning the infrastructures paper -- the most profound admin paper I've read. ( Which is a lot. )
    From alemartini
  • svn can be used to provide both version control and distribution of config files. git or bzr etc could also be used.

    edit the files on the 'master' server, svn commit them, and either a cron job or ssh can then run 'svn update' on the 'slave' servers.

    I tend to use ssh, and set up password-less access to allow the 'master' server to run a particular Makefile on the slaves - the Makefile runs 'svn update', and its timestamp-based dependency rules then decide what other things need to be done (e.g. generate hashed db user or group files from text, add or delete ip addresses from the nic, restart apache, and so on - any time you can determine that a sequence of actions is dependent upon a file changing and/or upon another sequence of actions, it's a good candidate for using make to automate the process).

    even on the 'master' server, everything is driven by make. edit one or more config files, and then run make. the Makefile figures out what needs to be done (svn commit, generate config fragments from source input files, 'ssh make' to other servers, etc). using make like this ensures that every step is done in the correct order, and that no step is forgotten. and making svn an essential part of the process ensures that every change is committed with a time-stamp and an explanation in the commit log.

    of course, many would use puppet or something like it to do this. if i were to start over from scratch, i probably would too....but i've been doing it like this for years, long before puppet was around. there were similar tools around before puppet, but puppet is the first one i've seen that looks like it might actually be worth the trouble of changing...but it's hard to justify changing something that works well for something that only looks like it will probably work better.

    Badman : "I tend to use ssh, and set up password-less access to allow the 'master' server..." << IMHO, that does not sound like a very safe way of doing things.
    Craig Sanders : depends on how you set it up. you can set it up so that the ssh client can only run one particular thing, and nothing else.
  • At the moment I have a script sitting on each web server that copies the configuration from a share and updates the configuration information to match the local server (IP, etc.). While this worked fine for 3 servers, at 10 servers it's annoying (log in to server, run script, log out, rinse, wash, repeat). I'd suggest a solution (such as Puppet and friends) that pushes the changes to the servers when you have completed your changes to the local file.

    From David
  • There are several solutions:

    • Shared filesystem, like NFS or OCFS2.
    • Puppetmaster or equivalent configuration manager
    • Shell scripting a solution (equivalent to puppetmaster, but lightweight -- shell into server, update files, bounce apache)
    • With subversion, use a post-commit hook to do the above (sketch after this list).
    • Manually doing the steps in the above shell script ... (here's your red stapler)
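
    For the subversion option, a bare-bones post-commit hook might look like this (the node names are placeholders, and each node's config tree is assumed to be a checked-out working copy):

    #!/bin/sh
    # hooks/post-commit on the repository server: update each web node's
    # working copy, then reload Apache only if the config still parses.
    for node in web1 web2 web3; do
        ssh "$node" "svn update /etc/apache2 && apache2ctl configtest && apache2ctl graceful"
    done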

    The big question: How is your CONTENT (or the dynamic scripts that access your database for content) currently synchronized? If you think about it, config files are just another sort of content.

Can't restore PostgreSQL database backup

The backup was created from a database with the UTF8 encoding using pg_dump. The backup is in the tar format.

I then created a new database on another server running the same version of PostgreSQL (8.2.4) using this command:

createdb -E utf8 db1

When running pg_restore I get the following error:

pg_restore: [archiver (db)] Error from TOC entry 1667; 0 14758638 TABLE DATA table1 db1 pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence for encoding "UTF8": 0xc520

The original database is no longer available.

How can I restore this data or find the byte sequence that is causing the problem?

  • Older versions of Postgres would allow invalid byte sequences to be entered into a database. There was a note about this and a suggested fix in a recent release note:

    Some users are having problems loading UTF-8 data into 8.1.X. This is because previous versions allowed invalid UTF-8 byte sequences to be entered into the database, and this release properly accepts only valid UTF-8 sequences. One way to correct a dumpfile is to run the command iconv -c -f UTF-8 -t UTF-8 -o cleanfile.sql dumpfile.sql. The -c option removes invalid character sequences. A diff of the two files will show the sequences that are invalid. iconv reads the entire input file into memory so it might be necessary to use split to break up the dump into multiple smaller files for processing.
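
    Spelled out as a shell session (the chunk size is arbitrary, and a tar-format dump like the one in the question must first be converted to plain SQL, e.g. with pg_restore -f as in the last answer below):

    # Strip invalid UTF-8 sequences from a plain-text dump.
    iconv -c -f UTF-8 -t UTF-8 -o cleanfile.sql dumpfile.sql

    # Show exactly which sequences were removed.
    diff dumpfile.sql cleanfile.sql

    # For dumps too big for memory, split first and clean piecewise.
    split -b 100m dumpfile.sql part_
    for f in part_*; do iconv -c -f UTF-8 -t UTF-8 -o "$f.clean" "$f"; done
    cat part_*.clean > cleanfile.sql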

    If the database is not very large or complex it might be easier to locate the offending text in the original database and correct it before doing a new dump. A field that has user-entered input or contains data imported from other sources might be a culprit.

    From Console
  • It's probably the same issue I had once when migrating from a 7.4 to an 8.2 db. I used the instructions in this web article to solve the problem. This presupposes that you still have access to the original database. Otherwise, you can probably restore it into an older version of PostgreSQL and give that procedure a try.

    From edomaur
  • This little Perl script may save you : Repairing Broken documents that mix UTF-8 and ISO-8859-1

    Redirect the script output to a new file. All illegal characters should have been replaced with their correct UTF-8 incarnation. The script reads the input line by line, too, so it shouldn't need too much memory.

    From wazoox
  • I solved this problem with the following steps:

    pg_restore -f db1.sql -v db1.tar

    I then removed everything from the db1.sql file except for the table1 copy command. Then ran:

    psql -d db1 < db1.sql

    This then gave me the exact line number within the file where the error was occurring. I then opened the file, removed the problem character, and re-ran the script.

    From Simon

SQL Server and SourceSafe

I have two computers running SQL Server (these are independent, used only for development and testing) and I would like to be able to use SourceSafe to keep these two SQL Servers in sync (they are on a LAN). I have SQL Server 2008, though it works in 2000 compatibility mode, and SourceSafe 2005.

I have SourceSafe installed on both computers, and I have a SourceSafe database on one of the computers. On the other computer (the one that doesn't control SS) I have a SQL database that I need to put into SourceSafe. How do I do this? The toolbar buttons in SQL Server Management Studio are there, but they are greyed out; the only thing it will let me do that has to do with SourceSafe is "launch source safe", and inside SourceSafe there is nothing about SQL databases.

  • I believe Visual Studio for Database Developers has this feature. There are also other 3rd party products that can sync your schemas.

    A database is not like 'normal' code you store in source control - in order to sync the databases you don't just replace lines of code, you must generate ALTER (and other) statements.

    If you are creating a database from scratch each time, you can just use create statements, but keeping two live databases in sync requires some logic to sync.

    You could also write SMO (an API to manage SQL) to script the database to a file and then add that to source control.

    Here is an example of such a tool.

    We use Redgate SQL Compare for such tasks.

    Also see: http://stackoverflow.com/questions/115369/do-you-source-control-your-databases

    Earlz : I am not using the Database Developers edition though, I am using the regular version. And I'm aware I could just dump the database to a script, but the database is like 40 MB, and given that everywhere it says it integrates with SQL Server, I was figuring it would be a bit less painful than dumping and restoring database backups every check-in.
    From Sam
  • If you just have the regular VS, you can use the Database Project. Write scripts to update the database, then just run them against each database server. This is what I have used to move changes (sync databases) between dev, test, and prod because we do not have database edition either.

    The database project can then be checked into source control.

  • In SSMS, VSS integration means being able to check a "SQL Server Scripts", "Analysis Services Scripts" or "SQL Server CE Scripts" project into VSS. These projects are little more than a collection of text files - unlike, say, a C# application project, which is a collection of files plus a build script. SSMS projects lack the "build" part.

    3rd-party products like Redgate's SQL Compare will get you closer to what you are describing.

  • There is no full integration of SQL Server with source control. Microsoft tried doing it years ago, and no one used it because it wasn't all that reliable.

    Most everyone simply keeps each object scripted out into a separate file and keeps these files in source control. You then edit the file and deploy the change to the database at the same time, and tag all the files you changed for the release.

    From mrdenny
  • It sounds like you want to version control not only the schema but also the data inside it? Why not script your database for the schema and then use SSIS to extract your data into CSV files? You can compress these files and then store them in your SCM of choice.

    I would not recommend Visual Source Safe as it can corrupt your version control database very easily. Have a look at Subversion via TortoiseSVN for Win32

    Sorry if I've misinterpreted your requirements, but I don't think you will find a tool that version controls your entire db - schema & data.

    From Wayne

How to get a subdomain to link to a folder?

I am running XAMPP, which provides Apache on Windows, as a dev server. I need to use a subdomain of localhost to access my localhost/images/ folder by going to http://images.localhost

However, I am having trouble; I have posted an image below showing my problem.

So the question is: how do you set up a subdomain on Apache and actually have it work for a folder like I need?

Please view this image - sorry, new users can't post more than 1 URL and NO images, so here is my 1 URL to an important image:

http://www.freeimagehosting.net/uploads/b0194b8e68.jpg

UPDATED VERSION

My Apache conf file:

NameVirtualHost localhost:8080

<VirtualHost localhost:8080>
   ServerName localhost
   ServerAdmin blah@blah.com
   DocumentRoot c:\server\htdocs
</VirtualHost>

<VirtualHost images.localhost:8080>
   ServerName images.localhost
   ServerAdmin blah@blah.com
   DocumentRoot c:\server\htdocs\images
</VirtualHost>


My Windows hosts file:

127.0.0.1 images.localhost
127.0.0.1 *.localhost
  • I would imagine that your problem is trying to do a subdomain of "localhost". Best thing to do would be to change your servernames to something like "localdomain.com" and "images.localdomain.com" (anything will do really), and then modify your hosts file to map that domain to 127.0.0.1

    Been a while since I've used Windows (sorry), but I believe the hosts file is at c:\windows\system32\drivers\etc\hosts

    And the format for the fake domain would be something like:

    127.0.0.1     localdomain.com
    

    Then, you'd just need to bounce your XAMPP server and everything should be good to go.

    Hope that helps :)

    womble : I'm sure the actual owner of localdomain.com (or whatever actual domain the OP chooses as a result of your bad advice) will be thrilled.
    jasondavis : Actually I have a domain for the site it's just not hosted anywhere so I will try this with my domain and see if it works
    From Ian Selby
  • Configuration looks fine to me. Check for simple issues, like restarting Apache after changing the configuration file, etc.

  • As described in RFC 2606, .localhost is treated specially.

    But one other thing to look at: what is your default directory? It could be that none of your virtual host settings are working properly, and instead the DocumentRoot of the server itself is always handling the requests.

    jasondavis : my doc root is c:\\windows\server\htdocs\
  • The problem is that 'localhost' always resolves to 127.0.0.1 but subdomains of localhost such as your 'images.localhost' are not defined and therefore are not resolving. You can correct this locally by editing your system's hosts file (usually in c:\windows\system32\drivers\etc) and adding the following line:

    127.0.0.1 images.localhost
    

    You could also add:

    127.0.0.1 *.localhost
    

    After saving the hosts file, your subdomains should resolve correctly.

    Edit: I see that you have the vhosts configured to listen on port 8080 but your URLs don't include the port number. You need to browse to the addresses with :8080 in them like this:

    http://images.localhost:8080/layout/homepage/welcome_image.jpg
    

    Alternatively, you can change the vhosts to listen on :80.

    I'm guessing the reason just localhost works is that your primary Apache conf has a Listen 127.0.0.1 and DocumentRoot c:\xampp\htdocs.

    jasondavis : Did you see the image in my post? I have all this done and nothing seems to work, I even tried using a real domain name in my hosts file and my computer will just timeout trying to load it, I have restarted apache and even rebooted my PC with no luck yet
    Dave Forgac : Did specifying the port number in your request work for you?
  • I even tried using a real domain name in my hosts file and my computer will just timeout trying to load it

    Maybe you have a problem with name resolution. You can try "ping images.localhost" from command line (Start -> Run -> "cmd")

    If that subdomain isn't working, maybe you can try with a real one. My last article might help: 42foo: all the virtual hosts you need for your web development

    I have this working on some domains, although I link them all to the same document root:

    <VirtualHost *:80>
            DocumentRoot /srv/apps/mydomain/current/public
            ServerName mydomain.com
            ServerAlias www.mydomain.com
    </VirtualHost>
    
    <VirtualHost *:80>
            DocumentRoot /srv/apps/mydomain/current/public
            ServerName assets0.mydomain.com
            ServerAlias assets1.mydomain.com
            ServerAlias assets2.mydomain.com
            ServerAlias assets3.mydomain.com
    </VirtualHost>
    
  • The problem, as far as I understand Apache virtual hosting anyway, is that you're using a catch-all in your virtual hosts config and a shared DocumentRoot, rather than explicitly defining each VHost's name and DocumentRoot:

    *:8080 is your base NameVirtualHost setting.

    This will catch all requests to :8080, and it will redirect all requests to the virtual host that matches the directive.

    It becomes problematic when you have two Vhost directives that are named the same thing, and one of the directive's DocumentRoot matches the server's DocumentRoot. If there is no "Overriding" DocumentRoot that the server has separately, then Apache will evaluate each Vhost's DocumentRoot, finding the "Occam's Razor" of the values it encounters if multiple Vhost's share a root path.

    In this case ...\htdocs is the DocumentRoot for both, because ...\images is contained within ...\htdocs. Thus any requests will automatically default to the Vhost that offers only ...\htdocs as its DocumentRoot.

    I realize that was a bit confusing, so to fix this: Switch to name based virtual hosting.

    UPDATE (2009-08-25)
    There's something I need to clarify before we move on:

    Apache and other web servers never listen on :8080 by default. Also, web clients like Firefox never make a request to :8080 by default. I assumed that you understood that from your original post, since your VHost directive showed a non-standard port of :8080. Now, I'm not so sure that was clear.

    In order for the previous revision of my post to have worked for you as it was setup (without port 80 redirects or what have you), you would need to specify the port when requesting a page:

    http://localhost:8080 and http://images.localhost:8080

    I should have included that information. As I said, I assumed that was clear from your original post. My apologies for that. By extension, I also assumed you were setting it up this way because another server was listening on port 80. If that's true, then you might consider combining the two servers into one setup, or turn off the alternate while you do your work with xampp.

    So, let's correct the VHost directives to listen to port 80, which is the default port that pages are served from, and requested from, by servers and clients respectively.

    I'm also going to get anal retentive about defining the localized permissions for the folders that you're serving data from, since I'm worried that there is a / Directive that is munging your ability to get pages from your sites.

    UPDATED VirtualHost Directives (2009-08-25):

    NameVirtualHost localhost:80
    
    <VirtualHost localhost:80>
       ServerName localhost
    
       # Naturally, this can be changed to a real email.
       ServerAdmin blah@blah.com
    
       # Set our DocRoot for the VHost.
       DocumentRoot c:\xampp\htdocs
    
       # Define access perms for our DocRoot.
       <Directory "c:\xampp\htdocs">
         # We're going to define Options, Override perms, and Allow directives.
         # FollowSymLinks probably doesn't work in Windows, but we'll keep it for posterity.
         Options Indexes MultiViews FollowSymLinks
    
         # Disallow Override
         AllowOverride None
    
         # Setup 1. Allow only *from* localhost. Comment out the following 3 lines, 
         # and uncomment Setup 2 below to allow access from all.
         Order deny,allow
         Deny from all
         Allow from 127.0.0.0/255.0.0.0
    
         # Setup 2. Allow all. Uncomment the following 2 lines, and comment out Setup 1.
         #Order allow,deny
         #Allow from all
    
       </Directory>
    </VirtualHost>
    
    <VirtualHost images.localhost:80>
       ServerName images.localhost
    
       # Naturally, this can be changed to a real email.
       ServerAdmin blah@blah.com
    
       # Set our DocRoot for the VHost.
       DocumentRoot c:\xampp\htdocs\images
    
       # Define access perms for our DocRoot.
       <Directory "c:\xampp\htdocs\images">
         # We're going to define Options, Override perms, and Allow directives.
         # FollowSymLinks probably doesn't work in Windows, but we'll keep it for posterity.
         Options Indexes MultiViews FollowSymLinks
    
         # Disallow Override
         AllowOverride None
    
         # Setup 1. Allow access only from localhost. Comment out the following 3 lines, 
         # and uncomment Setup 2 below.
         Order deny,allow
         Deny from all
         Allow from 127.0.0.0/255.0.0.0
    
         # Setup 2. Allow all. Uncomment the following 2 lines, and comment out Setup 1.
         #Order allow,deny
         #Allow from all
    
       </Directory>
    </VirtualHost>
    

    Apache will get grumpy about this setup if there is no corresponding DNS name that it can find for a virtual host, so the earlier recommendation about modifying the system's hosts file is still accurate. Note that when you edit the hosts file, you can put all the aliases for a single IP on the same line, saving you some confusion.

    127.0.0.1   localhost   images.localhost
    

    You can also do IP based Vhosting but I wouldn't recommend it. It's much more involved than what you're doing and is only "really" necessary when you're dealing with multiple vHosts using SSL.

    Anyway, the setup I've described works exactly as expected on my system (Ubuntu 8.10 x86_64, Apache 2.2.9) and should also work fine on yours.

    jasondavis : I added in your changes but it had no effect on my system for some reason, I posted my updates in my post above if you care to look, thanks
    Sean Lewis : Just in case there's no notification on updating an entry, I updated the entry with some additional information that will (hopefully) fix your problem.
    From Sean Lewis

Trouble sending an email with the PHP mail function. Full trace provided

I have a simple script setup:

<?php
mail('corgan1003@aol.com', 'Hello World', 'Testing a message');
?>

I cannot send email from my server to AOL accounts. The error details are below. GMail lets me send the message...so I guess AOL is just a bit stricter.

   Starting tcpick 0.2.1 at 2009-08-03 22:25 UTC
    Timeout for connections is 600
    tcpick: reading from tcp_dump.pcap
    1      SYN-SENT       67.23.28.65:49516 > 64.12.138.153:smtp
    1      SYN-RECEIVED   67.23.28.65:49516 > 64.12.138.153:smtp
    1      ESTABLISHED    67.23.28.65:49516 > 64.12.138.153:smtp
    220-rly-mg05.mx.aol.com ESMTP mail_relay_in-mg05.6; Mon, 03 Aug 2009 18:25:34 -0400
    220-America Online (AOL) and its affiliated companies do not
    220-     authorize the use of its proprietary computers and computer
    220-     networks to accept, transmit, or distribute unsolicited bulk
    220-     e-mail sent from the internet.  Effective immediately:  AOL 
    220-     may no longer accept connections from IP addresses which 
    220      have no reverse-DNS (PTR record) assigned.
    EHLO bandop.com
    250-rly-mg05.mx.aol.com fallsroadsunoco.com
    250 HELP
    MAIL FROM:<www-data@com>
    501 SYNTAX ERROR IN PARAMETERS OR ARGUMENTS
    RSET
    250 OK
    QUIT
    1      FIN-WAIT-1     67.23.28.65:49516 > 64.12.138.153:smtp
    2      SYN-SENT       67.23.28.65:45729 > 216.239.113.101:smtp
    1      FIN-WAIT-2     67.23.28.65:49516 > 64.12.138.153:smtp
    221 SERVICE CLOSING CHANNEL
    1      RESET          67.23.28.65:49516 > 64.12.138.153:smtp
    3      SYN-SENT       67.23.28.65:45729 > 216.239.113.101:smtp
    tcpick: done reading from tcp_dump.pcap

    20 packets captured
    3 tcp sessions detected

Do you know how I can make the FROM parameter come out correctly? Setting the FROM header in the PHP mail function does not work.

UPDATE

This little hack is working but I would prefer to fix this issue outside of PHP.

mail('corgan1003@aol.com', 'Hello World', 'Testing a message', null,'-faddress@domain.com');

I am super noob with mail servers

  • With AOL, messages often won't go through unless you fill out the form to get whitelisted. To change where the mail is coming from, edit the sendmail_path parameter in the php.ini file. This is mentioned in the PHP mail() doc under additional_parameters.

    For Example:

    sendmail_path = "/usr/sbin/sendmail -t -f me@kyle.com"
    

    You can instead pass a parameter to the mail() function:

    <?php
    mail('nobody@example.com', 'the subject', 'the message', null,
       '-fwebmaster@example.com');
    ?>
    

    Lastly, you might find an alternative mail package more flexible, such as the PEAR Mail package or msmtp; with these you can specify other SMTP servers.

    Tony : right now i am using phpmailer because the site is wordpress based and it comes stock. only issue is i have to change a wordpress file to create this setting which is bad practice. i would like to know how to fix this in postfix settings. it would be great to have postfix use the virtual host name as a host as well.
  • Try escaping the at sign.

    mail('corgan1003\@aol.com', 'Hello World', 'Testing a message');
    
  • You want to change the envelope From: address, which is different from the From: header. See this comment in the mail() function's doc.

    The envelope address depends on your MTA's configuration - in your case, 'www-data' is the user your script runs as, and 'com' is (part of?) your machine's hostname. Assuming you are on *nix, you can try to override the envelope address like this:

    mail('corgan1003@aol.com', 'Hello World', 'Testing a message', null,'-faddress@domain.com');
    

    where address@domain.com is the envelope sender address you want to show.

    If that works, and you have access to your php.ini file, you can set the envelope sender address there - see Kyle's post.

    You might also want to have a look at your MTA's (sendmail, postfix) configuration - it looks like it has a problem with your hostname setting. Changing PHP's settings will fix it for PHP, but if something else (cron, logwatch) on your system wants to send mail, it would be helpful to have a working MTA.

    Edit after your comment: It's hard to suggest anything without knowing your mail server's config, but for a start, try the following:

    myhostname = mail.virtualhostname.com
    mydomain = virtualhostname.com
    myorigin = $mydomain
    masquerade_domains = virtualhostname.com
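
    If editing main.cf by hand feels risky, the same settings can be applied with postconf and a reload (the domain values are placeholders for your real one):

    postconf -e 'myhostname = mail.virtualhostname.com'
    postconf -e 'mydomain = virtualhostname.com'
    postconf -e 'myorigin = $mydomain'
    postconf -e 'masquerade_domains = virtualhostname.com'
    postfix reload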
    
    Tony : yes, i would rather fix this for postfix and not just PHP. the postfix configuration file is huge though. all i really want is the FROM field to be "noreply@virtualhostname.com". I set myorigin = $mydomain but that did nothing. Maybe you could provide some direction?
    Tony : my mail server's config is the default postfix installation. i haven't done anything fancy. adding these attributes to the config does not seem to change the envelope's FROM address

How to back up an MSSQL database from another host using osql?

Hi ServerFault Family:

We have been performing our SQL database backups using osql as follows:

osql -s IP-9873743\SQLEXPRESS -E -Q "BACKUP DATABASE DBNAME TO 
Disk ='e:\sysbkup\sqlbackup\DBNAME.bak' WITH NOFORMAT, 
INIT, NAME = N' DBNAME-Full Database Backup', 
SKIP, NOREWIND, NOUNLOAD, STATS = 10"

We want to do the same from another host, but when we add the -H (host) option to the command we get "The login failed". With MS SQL Server Management Studio, we are able to log in to the other server fine with Windows Authentication, since my machine and the other host use the same usernames with the same password.

When I run the backup statement from the Management Studio console, the backup is saved to the disk of the other host instead of being transferred to the client machine. Any ideas how I can back up an MSSQL database from a host other than the one running the MSSQL engine itself?

  • You should be specifying the server name in the -S parameter, not -H, though this will still back up to the server the SQL service is running on.

    In my experience SQL Server is fussy about operating on network drives, but you could try specifying a UNC path (\\<server>\<share>\<backup_file>) to the place on the network you want the backup to go. I believe the user the SQL Server instance is running as will need write access to the destination machine; I don't think it will use your account despite logging in via Windows integrated security, so you will need to arrange that first.
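
    A hedged version of the original command with a UNC destination (the share name is a placeholder, and as noted the SQL Server service account needs write access to it):

    osql -E -S IP-9873743\SQLEXPRESS -Q "BACKUP DATABASE DBNAME TO
    Disk = '\\clienthost\sqlbackup\DBNAME.bak' WITH INIT, STATS = 10"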

    Geo : Thanks David. I tried both with -H and with -s. When I did the backup using MS Mgmt Studio, the backup was done on the host's C drive instead of the client host's C drive.
    David Spillett : The path you specify is _always_ relative to the server SQL is running on, not the client that asks it to run a backup. That is why you need to use a UNC path, though for that to work you need to ensure the user the service runs as has write access to that location.
    Jason Cumberland : Don't forget that it's an upper-case S, and if you're using the SQL 2005 client tools then switch to SQLCMD.exe instead. If the remote SQL Server service account is not a domain account or network service then using a UNC will not be possible. osql/sqlcmd -E -S server\instance -Q "backup database db to disk = '\\otherhost\share\db.bak' with init, stats=10"

What protocols/ports to open for WinSCP

Which ports (and/or protocols) should be given priority for WinSCP to connect to a *nix machine?

Details:

I have a Windows client running WinSCP, which connects through a WRT54 router (running Tomato) to a remote Ubuntu server. The Tomato router has Quality of Service options which allow me to specify what ports and protocols get priority.

What settings do I need to add to classify WinSCP?

  • If you are using SCP or SFTP (which is most likely) then port 22 TCP will be used as the connection will be using the SSH protocol.

  • You will want to use the TCP destination port of 22 to classify the traffic (optionally with the destination IP of the server). The source port will change and will not always be the same. Ports are part of the TCP header, and the destination IP is of course part of the IP header.

    The port SSH is listening on at the server doesn't have to be 22, but that is the default port for the SSH protocol (including SFTP). It won't be different from the default unless you had to change the port in WinSCP when setting up the connection.

    Josh Brower : -1 for not a complete answer: The destination port will not always be 22. It will be dependent on what has been set on the Ubuntu box.
    grawity : But in that case, it would be impossible to answer the question correctly.

Is this a good server for a web app?

I need a new dedicated server to host my web app and am wondering what kind of load (average requests per month) this server could support. It will be running Windows Server 2008 Standard, and SQL Server 2008. The server info is:

AMD Opteron 1218

6 GB DDR2 Memory (up to 667MHz)

2 x 500 GB 7200 RPM SATA II , RAID 1

Unlimited (10mbps) bandwidth

Also, is $129 a month a good deal for this hosting?

Thanks!

  • Per month, that can support a crazy high number of page views. Think smaller: how many page views per minute can this server support? That's an easier number to process.

    The number of requests it can handle all depends on how database-driven your site is, and how well designed your database is. For just HTML pages with very little database work being done, it could probably handle 10k+ a minute without issue, if not higher (that's only 166 per second). If you have a large database that isn't correctly designed and optimized, then that number could drop to as low as just a few hits per second, as the CPU and memory will all be taken up by SQL Server.

    Can't say anything about the cost - we host our own servers in a colo.

    From mrdenny
  • There are so many different things you can do with a machine that simply nobody will be able to tell you how far you'll get with that server - it will depend on how much overall data you have to deal with, how much data is going back and forth between the server and the web clients, and how many database requests will happen and how complex they are - and that's just a rough overview. If your application is programmed well, you can get a lot of throughput; if it is built badly, you will reach the limit much sooner.

    The same goes for raw data throughput - well-designed server hardware can give you much more performance than a bad build, even when the numbers on the spec sheet are the same. And bandwidth is nice, but a lot depends on how the datacenter is connected and built.

    So, to know whether the offer is good compared to others, you have to look at hosting comparison sites to see what ratings your provider gets; to know whether that server is enough to handle your application, you'll have to test it.

    From Henning
  • In terms of load, the dual core Opteron is fine, the RAM is more than enough, but how much traffic are you anticipating/hoping/wishing to have? In addition, is your Win2k8 64-bit? Is SQL Server 64-bit? Have you thought about the potential security and performance issues having the web server and database server on the same box?

    Again, without having all the information or particulars about your setup: if your web app (I assume it's only 1 web application) is just starting out and you're not anticipating a large load immediately, why not consider shared hosting first? It'll be cheaper (in the beginning), and most providers make it easier to upgrade from shared hosting to server hosting than to go down from server hosting to shared.

    But to get to your question:

    I need a new dedicated server to host my web app and am wondering what kind of load (average requests per month) this server could support.

    If you're talking about static pages (cached, preferably) with little to no database interaction, you could easily hit 10k/min like mrdenny mentioned. I can't foresee any issues with taking that kind of load. Bear in mind some IIS7 tuning is necessary, but the hardware should have no issues in dealing with that.

    Now if you're talking about one ASP.NET web app with heavy database interaction and a lot of performance-tuned programming (app pool tuning, using partial caching, refraining from ViewState, using in-proc or cookie sessions, etc.), I'd be fairly optimistic and say that ~250/min is achievable. I'd really like a little more explanation of what you're trying to do rather than the equipment you're planning to use.

    Also, is $129 a month a good deal for this hosting?

    It's a little steep, but I'm sure they have 24/7 support, high SLA uptime, redundant power available on-site, etc. Personally, if the traffic isn't going to materialize and the server isn't really being utilized (~65%-70%+ utilization), maybe you're better off starting with shared hosting. Sorry to be repetitious, but don't spend all the money at the beginning. Most providers are helpful if you need to move from shared hosting to server-level hosting anyway.

    From osij2is
  • It's impossible to say what's "enough" without actually testing it. As others have mentioned, it varies by content. Are you doing lots of server side/database work? Are you serving lots of images or video? All of this could affect your decision.

    I was serving out many images at one time and decided to go with Akamai to distribute them. It freed up my server for grunt work and let the CDN deliver the static data. Considerations like these matter.

    Based on your description, assuming the pipe is fat and that you get decent access to the configuration, it sounds like an OK deal - but, you should probably take the time to research the provider you're considering joining. Google them and see what comes up.

    From Joe
  • Sounds like it's ample in terms of server resources. However, I wouldn't shop for a server until you have done some load testing to determine the basic load of the application. Get some sort of request recorder to capture a basic session on the server.

    From this you should be able to estimate some performance characteristics such as:

    • Average page size, which can indicate how much bandwidth might be consumed for a given set of users.

    • How long pages take to generate, giving you an idea whether your app is CPU-intensive in generating the HTML served up.

    • How much RAM is consumed in the test environment under this load, and whether more RAM on a bigger server will help or hurt.

    From this you can decide whether you need to rework the app a bit, and then make an informed decision on whether this server fits your needs at a price you are willing to pay.
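
    For a quick first pass at those measurements, ApacheBench is a common choice (the URL and concurrency level are placeholders):

    # 10,000 requests, 50 at a time; watch "Requests per second" and
    # "Time per request" in the output.
    ab -n 10000 -c 50 http://your-app.example.com/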

    From MikeJ

How do I tie OSX SSH User authentication to a custom 'users' mysql database?

We have a client intranet with client user credentials stored in a MySQL database.

We are now trying to enable SSH access to one of our servers for each client, where the authentication would come from our existing database.

Any help would be awesome.

  • It sure looks like OS X uses PAM. In that case you should be able to use PAM-MySQL to perform any type of auth you want. Out of the box, OS X uses a pretty straightforward PAM config for sshd:

    $ cat /etc/pam.d/sshd
    # sshd: auth account password session
    auth       required       pam_nologin.so
    auth       optional       pam_afpmount.so
    auth       sufficient     pam_securityserver.so
    auth       sufficient     pam_unix.so
    auth       required       pam_deny.so
    account    required       pam_securityserver.so
    password   required       pam_deny.so
    session    required       pam_launchd.so
    session    optional       pam_afpmount.so
    

    I haven't set up PAM-MySQL before, but assuming it's similar to other external database PAM modules, there will be a config file that you use to select the db credentials, which tables should be used, etc. Then you would insert auth sufficient pam_mysql.so just before the pam_unix.so line in /etc/pam.d/sshd.

    Theoretically that should be all you need.
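
    A sketch of the resulting /etc/pam.d/sshd, with pam_mysql's options given inline as its README describes (the credential, table, and column values are placeholders):

    # sshd: auth account password session
    auth       required       pam_nologin.so
    auth       optional       pam_afpmount.so
    auth       sufficient     pam_securityserver.so
    auth       sufficient     pam_mysql.so user=dbuser passwd=dbpasswd table=users usercolumn=myusers passwdcolumn=mypasswds
    auth       sufficient     pam_unix.so
    auth       required       pam_deny.so
    account    required       pam_securityserver.so
    password   required       pam_deny.so
    session    required       pam_launchd.so
    session    optional       pam_afpmount.so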

    Insyte : I've confirmed that the pam_mysql project I linked to above will build just fine as long as you have the MySQL libraries installed. And it looks like instead of a config file the various options are passed as arguments in the "pam.d/sshd" file. I can't test any further without actually building up a dummy database, but it sure looks promising.
    Insyte : Take a look a the README file included in pam-mysql. I was incorrect about the settings being stored in a config file; they're appended to the pam file like so: "auth sufficient pam_mysql.so user=dbuser passwd=dbpasswd table=users usercolumn=myusers passwdcolumn=mypasswds". There are several other options that can be used as well, all well described in the README file.
    From Insyte
  • There are probably a couple ways you could do this:

    1. Set up an Open Directory master, bind your server to it (or maybe that server would be the OD Master), and write some hooks for your client intranet that add/remove/update users in OD whenever there is a change
    2. Write a Directory Services plug-in that is installed on your server which would talk to your MySQL database

    For the 1st option, see Apple's Mac OS X Server documentation, esp. the parts relating to Open Directory. There is a dscl command which can be run from scripts to add/remove/update entries in Open Directory.

    For the Directory Services option, see Apple's Directory Services documentation, esp. the Writing Open Directory Plug-ins document.

    There are probably other ways, but these are the two that jumped to mind.

    morgant : Upvoted the PAM answer as that should be exponentially easier to implement than either of my suggestions.
    From morgant

Need help with Powershell error "The left hand side of an assignment operator..."

I've got a PowerShell script that I'm trying to set up so it can send an Exchange status email to me every day. I've got the script working just fine when I run it manually from an EMS console window, but to run it as a scheduled task I need to add the line Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin at the top. This addition seems to be causing a problem. Here is the script:

Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin   
param(
$MailServer = "mailserver",
$MailTo = "me@company.com",
$Mailfrom = "me@company.com",
$Subject = "Exchange System Status " + (Get-Date))
$body = Get-MailboxDatabase -Status | select Name,LastDifferentialBackup,LastFullBackup | 
Out-String
$body2 = Get-ExchangeServer | where {$_.ServerRole -Match "HubTransport"} | Get-Queue | select Identity,Status,MessageCount,NextHopDomain | Out-String
$email = new-object system.net.mail.mailmessage
$email.to.add($MailTo)
$email.from = $Mailfrom
$email.subject = $Subject
$email.isbodyhtml = $False
$email.body = $body,$body2
$client = new-object system.net.mail.smtpclient $mailserver
$client.send($email)

When I have that PSSnapin line at the top and I run the task, I get this error: Invalid assignment expression. The left hand side of an assignment operator needs to be something that can be assigned to like a variable or a property

Taking the line out and then trying to run the task obviously wouldn't work, since the default PowerShell session doesn't have the Exchange snap-in. I'm calling the script using a batch file in the scheduled task with the command: Powershell -command "& {C:\Scripts\exchemail.ps1 }"

  • It may not be that one line that is the issue...just when it is commented out the whole thing breaks, so the actual error doesn't get a chance to surface.

    Try breaking this script up into multiple lines, assigning variables and properties separately, and you should be able to narrow down the issue.

    Agent : Not sure which comment marks you are referring to, but I just edited the script in the original post since the formatting seemed to be screwed up initially. The error message references line 4, char 10 as the problem, but I'm not even changing anything in that line when switching between manual and scheduled/batch files.
    From Adam Brand
  • Could it be an issue with quotes and/or escaping characters? Maybe the difference is not in the one line added/removed, but in the way you run it?

    Agent : I've tried assigning values to the variables with single quotes instead, but still get the same error. For running it, I'm following the guide at this link, which says to just use a batch file to launch powershell along with the script as the input http://exchangeshare.wordpress.com/2008/12/08/how-to-schedule-powershell-script-for-an-exchange-task/
    Adam Brand : What I mean is, don't use Param. Call everything manually as it is easier to trace.
  • Instead of trying to figure out what's wrong, I'll suggest what works 100% for me.

    This script gets mailbox stats, but you can adapt it to do whatever you want.

    Contents of Get-MailboxStatistics.ps1:

    $FromAddress = "noreply@company.local"
    $ToAddress = "sysadmin@company.local"
    $MessageSubject = "Exchange Mailbox Size Report"
    $MessageBody = "Attached is the current list of mailbox sizes."
    $SendingServer = "exchange.company.local"
    
    Get-MailboxStatistics | Sort-Object TotalItemSize -Descending | Select-Object DisplayName, @{Name="Size(MB)";Expression={$_.TotalItemSize.Value.ToMB()}}, ItemCount, LastLogonTime | Export-CSV -path "mailboxstats.csv" -notypeinformation
    
    ###Create the mail message and add the statistics text file as an attachment
    $SMTPMessage = New-Object System.Net.Mail.MailMessage $FromAddress, $ToAddress, $MessageSubject, $MessageBody
    $Attachment = New-Object Net.Mail.Attachment("mailboxstats.csv")
    $SMTPMessage.Attachments.Add($Attachment)
    
    ###Send the message
    $SMTPClient = New-Object System.Net.Mail.SMTPClient $SendingServer
    $SMTPClient.Send($SMTPMessage)
    

    This is run by a scheduled batch file containing this line:

    C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -PSConsoleFile "D:\Exchange\Bin\ExShell.psc1" -Command C:\Scripts\Get-MailboxStatistics.ps1
    
    Agent : Graeme, after futzing around with what I was doing initially, I used your method, which works great. Thanks, I can see many more potential uses for this combo now!
  • This script must be missing something. You are using param(), which must be the first statement in a script block. What is likely happening is that PowerShell is looking at this as if you typed

    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin   mailserver","me@company.com","me@company.com",Exchange System Status ...
    

    You are missing a function declaration and braces (if that's what you are trying to do); there is no function on the page you mention as the source.
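
    A sketch of the reordering this forces, using the script from the question (param() first, then the snap-in):

    param(
    $MailServer = "mailserver",
    $MailTo = "me@company.com",
    $Mailfrom = "me@company.com",
    $Subject = "Exchange System Status " + (Get-Date))
    # param() must be the first statement, so load the snap-in after it.
    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin
    # ...rest of the script unchanged.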

    From Jim B