Thursday, January 27, 2011

When I connect to our Windows VPN, my Outlook client loses its connection

The title basically says it all, but I will elaborate.

I have my Outlook client set up to use the Outlook Anywhere feature. This works fine when not connected to our VPN. However, when I connect to the VPN, the Outlook client loses its connection.

I set up our VPN using Windows Server 2003's RRAS, which seems to work fine, but it seems like this must be causing the issue.

My workstation is running Windows 7 Professional and I have the properties of my VPN connection set to NOT use the default gateway on the remote network (so I'm using my local gateway and am able to browse the internet without issue).

Does anybody have any idea what the problem could be? This is very frustrating.

Thanks in advance.

  • What DNS servers do the VPN clients get? I'm assuming that Outlook Anywhere is configured to use a public FQDN that can't be resolved while connected to the VPN. When the VPN client is connected, can it resolve the FQDN of the Exchange server?


    EDIT 1

    Just so I have a better understanding, we're referring to the public FQDN of the Exchange server, right? If so then it's safe to assume that your internal and external DNS namespaces are different (.com and .local, or whatever), right?


    Edit 2

    Now that we've established what the problem is, we have to determine an appropriate solution. There are a number of ways to tackle this, and although I've never encountered your specific problem, here are some notes and suggestions:

    Notes:

    RRAS will provide DNS server settings to VPN clients via one of two methods:

    1. If the RRAS server is configured to allocate IP addresses to the VPN clients from DHCP (internal DHCP server), then the DNS server settings configured in the DHCP server will be assigned to the VPN clients.

    2. If the RRAS server is configured to allocate IP addresses to the VPN clients from a static pool on the RRAS server itself, then the RRAS server will assign whatever DNS server settings are configured in the TCP properties of the NIC on the server that is configured for incoming VPN connections.

    Suggestions:

    One way to allow VPN clients to resolve both internal and external DNS records would be to set up another internal DNS server as a forwarding-only server (this could probably be the RRAS server itself). Configure this forwarding DNS server to use publicly available DNS servers for external DNS resolution, and use conditional forwarding to your internal DNS servers for internal DNS resolution. Configure the RRAS server to use a static IP address pool for VPN clients that resides within your LAN subnet (to allow connectivity to internal resources), and set the NIC that is configured for RRAS to use this new DNS server for DNS.

    This effectively creates a scenario where the RRAS server assigns to the VPN clients the DNS server(s) that its RRAS-bound NIC is configured to use. When the VPN client needs to resolve an external DNS record, the new DNS server forwards the query to whatever public DNS server you've configured it to use. When the VPN client needs to resolve an internal DNS record, the new DNS server forwards the query to your internal DNS servers, based on the conditional forwarding you configure on the new DNS server.
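
    As a rough sketch of the conditional-forwarding piece, assuming Windows Server 2003's dnscmd tool and placeholder names (corp.local for your internal zone, 10.0.0.10 for your internal DNS server), the forwarder zone could be created like this:

    dnscmd . /ZoneAdd corp.local /Forwarder 10.0.0.10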

    In review, this seems a little complicated and may be "over-engineering" the solution. You may want to see if anyone else chimes in with a simpler, more "elegant" solution.

    ThingsToDo : No, I'm not able to resolve the FQDN of the Exchange server. What do I need to change in either DNS or RRAS so that the VPN clients will be able to connect while connected to the VPN? I believe this is also causing other DNS issues that I haven't been able to resolve.
    joeqwerty : See my edits. Also, Outlook Anywhere wasn't really intended for use via VPN, one of the reasons being the problem you're experiencing now.
    ThingsToDo : Edit 1: Yes, they are separate (.com and .local)
    ThingsToDo : I'm not sure that this fully solves my problem (or at least not the way I want it to be solved) but you've been way too helpful to not award you the checkmark. You've helped me out a few times in the past (perhaps under different names)...just wanted to say thanks for sharing your information so freely.
    joeqwerty : Glad to help...
    From joeqwerty
  • Set the TCP/IP properties of the VPN adaptor on the RRAS server to include the DNS server that would resolve the mail server FQDN to a VPN address.

    For more discussion of RRAS and DNS see this thread.

    From imoatama

robocopy transfer file and not folder

I'm trying to use robocopy to transfer a single file from one location to another, but robocopy seems to think I'm always specifying a folder. Here is an example:

robocopy "c:\transfer_this.txt" "z:\transferred.txt"

But I get this error instead:

2009/08/11 15:21:57 ERROR 123 (0x0000007B) Accessing Source Directory c:\transfer_this.txt\

(note the '\' at the end of transfer_this.txt)

But if I treat it like an entire folder:

robocopy "c:\folder" "z:\folder"

It works but then I have to transfer everything in the folder.

How can I only transfer a single file with robocopy?

  • See Robocopy /?

    Usage : ROBOCOPY source destination [file [file]...] [options]

    robocopy c:\folder d:\folder transfer_this.txt

    From KPWINC
  • According to the Wikipedia article on Robocopy:

    http://en.wikipedia.org/wiki/Robocopy#Folder_copier.2C_not_file_copier

    Folder copier, not file copier

    Robocopy syntax is markedly different from standard copy commands, as it accepts only folder names as its source and destination arguments. File names and wild-card characters (such as "*.*") are not valid source or destination arguments. Files may be selected or excluded using the optional filespec filtering argument. Filespecs can only refer to the filenames relative to the folders already selected for copying. Fully-qualified path names are not supported. For example, in order to copy the file foo.txt from directory c:\bar to c:\baz, one could use the following syntax:

    robocopy c:\bar c:\baz foo.txt
    

Messages stuck in sendmail queue beyond confTO_QUEUERETURN lifetime

CentOS 5.x | SendMail

Hi Guys,

Messages are stuck in my server's /var/spool/mqueue/ folder beyond the lifetime I have specified in confTO_QUEUERETURN (5d). Any idea why this could be? The file permissions appear fine; files in the mqueue folder show rights of:

-rw------- 1 root smmsp

This is causing an issue because the queues are slowly getting larger and larger.

Any thoughts?

-M


Additional information: I'm seeing the queue size consistently growing. maillog shows entries like:

grew WorkList for /var/spool/mqueue to 28000

Any thoughts?


Just thinking out loud -- could the queue runner not be completing its job in time? Maybe I could check with time sendmail -q -v

Any thoughts?

  • Although I have very little experience with CentOS, I do seem to recall seeing some flavor of Linux that didn't have sendmail configured with a queue runner by default. I would be curious to see if your old messages are removed after running 'sendmail -q'. If that's the case, then I think you just need to configure your queue runner to run periodically (a cron sketch follows).
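
    For instance, a minimal crontab entry that kicks off a queue run every hour (the sendmail path is the usual one on CentOS, but verify yours):

    # run the sendmail queue once an hour
    0 * * * * /usr/sbin/sendmail -q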

    From unixguy

PHP5, curl, PostgreSQL, SSL, and segmentation faults (ubuntu 10.04)

This ONLY happens over SSL.

When I load my PHP extensions like so:

extension=pgsql.so
extension=gd.so
extension=mcrypt.so
extension=memcache.so
extension=pdo.so
extension=pdo_pgsql.so
extension=pdo_sqlite.so
extension=curl.so

I see segmentation faults like child pid xxxxx exit signal Segmentation fault (11). It seems to be between Postgres (pgsql) and curl. Commenting out curl makes everything work fine, but I need curl. I Googled a bit and this seemed to be an older issue that had been resolved, but it's happening to me now, with PHP 5.3.2 and Postgres 8.4 libraries from the standard Ubuntu packages.

Any thoughts? Some installed packages:

i   libssl0.9.8                     - SSL shared libraries
....   
i   postgresql                      - object-relational SQL database (supported
i   postgresql-8.4                  - object-relational SQL database, version 8.
i   postgresql-8.4-plr              - Procedural language interface between Post
i   postgresql-client               - front-end programs for PostgreSQL (support
i   postgresql-client-8.4           - front-end programs for PostgreSQL 8.4
i   postgresql-client-common        - manager for multiple PostgreSQL client ver
i   postgresql-common               - PostgreSQL database-cluster manager
i   postgresql-contrib              - additional facilities for PostgreSQL (supp
i   postgresql-contrib-8.4          - additional facilities for PostgreSQL
i   postgresql-doc                  - documentation for the PostgreSQL database
i   postgresql-doc-8.4              - documentation for the PostgreSQL database
i   postgresql-server-dev-8.4       - development files for PostgreSQL 8.4 serve

And then php5-curl.

Apache - PHP executes over HTTP OK, but not over HTTPS

Hello,

I have a new dedicated Linux web server. My hosting provider gave me a setup of Apache with PHP on it.

When I open a URL in a browser by IP, i.e. http://xxx.yyy.zzz.vvv/test.php, the PHP script gets executed and it works fine. So everything works in that case.

The problem occurs if I call https in a browser, like https://xxx.yyy.zzz.vvv/test.php

In that case, I get the browser's Save As option, and all I can do is save the PHP file to my PC. So it looks to me like there is some misconfiguration in Apache.

The provider's support told me that this will work once I build a certificate on the Apache server, but I'm not sure about that. Can you tell me if the provider's support is right?

Also, Plesk is installed on the server, and Plesk has made a lot of problems in the past. Could it be that Plesk created this problem?

If you can help me solve this - thank you in advance!

  • Hi,

    your host lies: if there's no certificate you would get a bad-certificate message, not a source-code download. My guess is that your HTTPS settings are way too strict and are preventing scripts from being executed. I'm sending a copy of a properly configured HTTPS .conf file:

    NameVirtualHost domain.tld:80 
    <VirtualHost your_server_ip:80>   
    ServerAdmin webmaster@domain.tld   
    DocumentRoot /path/to/site/root/  
    ServerName domain.tld  
    ScriptAlias /cgi-bin/ "/path/to/site/root/"  
    </VirtualHost>  
    
    NameVirtualHost domain.tld:443  
    <VirtualHost your_server_ip:443>  
    SSLEngine on  
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL  
    SSLCertificateFile "/path/to/your/file.crt"  
    SSLCertificateKeyFile "/path/to/your/file.key"  
    
    <FilesMatch "\.(cgi|shtml|phtml|php)$">  
        SSLOptions +StdEnvVars  
    </FilesMatch>  
    BrowserMatch ".*MSIE.*" \  
             nokeepalive ssl-unclean-shutdown \  
             downgrade-1.0 force-response-1.0  
    
    CustomLog "logs/domain.tld-ssl-request_log" \  
              "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"  
    
    DocumentRoot /path/to/domain/root  
    ServerName domain.tld  
    ServerAdmin webmaster@domain.tld  
    ScriptAlias /cgi-bin/ "/path/to/domain/cgi-bin/"  
    </VirtualHost>  
    

    In your httpd.conf you might want to include/check for this:

    <IfModule ssl_module>  
    SSLRandomSeed startup builtin  
    SSLRandomSeed connect builtin  
    Include /etc/httpd/conf/ssl/*.conf  
    </IfModule>
    

    In my case I have separate files for the domains with SSL certificates, so I include them with the statement above.

    And finally, make sure you have the OpenSSL package installed on your server.

    That's it. You can generate self-signed certificates to test it out.

    From Rodrigo

How can I set a disk quota for a group (vs a user)

How can I set the disk quota for a group (vs. a single user) on an NTFS volume?

I'm using Windows Server 2003 SP2.

  • As I commented to my unfortunately incorrect answer to your previous question - "Quotas only exist for users. My bad."

    From mfinni
  • I've never used it, but Windows Server 2003 R2 introduced something called File Server Resource Manager (FSRM), which apparently gives you more options when it comes to quotas. In the original implementation, NTFS quota support applied to users and volumes, not (sub)folders or groups.

    Edit: just noticed you said SP2 and not R2. Not sure if you actually have R2, but I'll leave this here anyway.
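
    If you do turn out to have R2 with FSRM installed, directory quotas (which apply to everyone writing to a folder, rather than per user) can be created from the command line. A sketch using FSRM's dirquota tool, with a placeholder path and limit:

    dirquota quota add /path:D:\Shares\GroupData /limit:10GB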

Sendmail/Postfix Adding Linux User Account to From header line.

I just moved to a new server; we're using Postfix now instead of Sendmail. The issue is that mail sent from PHP using the mail() function (which interfaces with /usr/sbin/sendmail) shows up in the client's inbox with the 'friendly' name shown as Apache. Which is obviously confusing to an end user, causing them to wonder why they are being emailed by a Native American tribe.

Postfix (via its sendmail wrapper) is taking whatever you put in the -f parameter and tacking on the Linux user who called the program. So the From line in the header winds up looking like this:

From: sales@whatever.com (Apache)

Causing the client to use what's in the parentheses as the 'friendly name'.

I could manually set the From header in PHP, but I'd rather just stop Postfix from doing that, because I'd have to edit PHP code in hundreds of places.

  • My understanding of your issue is that the GECOS field of the password entry for the apache user is being sent. Do you really need a "friendly name", as you call it? If not, have you thought of removing the GECOS name from the apache password entry? Then there should be no "friendly" name attached to these email notes.
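
    A one-line sketch of that change, clearing the comment (GECOS) field for the apache account rather than editing /etc/passwd by hand:

    usermod -c "" apache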

    profitphp : Wow, that was the last place I thought the issue would be. I just edited my passwd file and cleared the GECOS field; works as expected now. Thanks!
    From mdpc

Why won't PHP 5.3.3 compile libphp5.so on Red Hat Enterprise?

I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, will not be compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

==========

'./configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs'

...

*** Warning: inter-library dependencies are not known to be supported.
*** All declared inter-library dependencies are being dropped.

*** Warning: libtool could not satisfy all declared inter-library
*** dependencies of module libphp5.  Therefore, libtool will create
*** a static module, that should work as long as the dlopening
*** application is linked with the -dlopen flag.

copying selected object files to avoid basename conflicts...
Generating phar.php
Generating phar.phar
PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
clicommand.inc
pharcommand.inc
directorytreeiterator.inc
directorygraphiterator.inc
invertedregexiterator.inc
phar.inc

Build complete. Don't forget to run 'make test'.

=============

PHP 5.2.13 recompiles just fine, so something is up with 5.3.3.

Any help would be greatly appreciated!!

  • Unfortunately I have the same problem, whereas PHP 5.3.2 compiles just fine with the same configure statement.

    This makes me think that something might have broken on certain platforms (mine included) when 5.3.3 was released.

    I have an open bug for this at http://bugs.php.net/bug.php?id=53116

    brandon k : I would like to add that if you are in a hurry to get to the new features in 5.3, you should try 5.3.2. You shouldn't have any problems getting it to compile, and when we figure out what's wrong with compiling 5.3.3, you'll already have new code to run on it.
    From brandon k

Prevent non-replication writes to MySQL slave?

We have some MySQL database servers set up with row-based replication, for performance. The software writes to the master, and reads from either the master or the slave. Everything's working great, for the most part.

It's my understanding that MySQL will allow writes to the slave, even though it knows it's a MySQL slave. Ideally, I'd kind of like to close this, so even if somebody writes some bad code that gets a read-connection and does an UPDATE, it will throw an error rather than put data on the slave.

Is there a way to do this in MySQL? Obviously we'd like to make this impossible from our software, as well, but like a firewall on our servers, I'd like to be as defensive as possible.

Thanks!

  • Enable the read-only option in my.cnf. It can also be specified as a flag on the command line using --read-only with mysqld.
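
    A minimal sketch of the my.cnf entry on the slave (it can also be flipped at runtime with SET GLOBAL read_only = 1;):

    [mysqld]
    read_only = 1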

    vmfarms : Note that this will not work for superusers (ie: root user in MySQL) as it doesn't obey read-only.
    From Warner
  • Only give replication-related rights to the users on the slave. You still have the issue of root user rights, but you can remove remote root access to the DB server.

    From Craig

DHCP giving errors on static IP

I believe it's the DHCP server, but I can't seem to figure it out. The office has about 35 computers, most of them with dynamically assigned IPs. A few computers cannot access the Internet unless you manually input the IP and DNS, and that works for three to four days, until you have to renew the IP and input a different address. What could be the problem?

  • It is probably that your DHCP server is not set up to give out enough IP addresses. Expand the DHCP range on your server and you should be fine.

    From Jason Berg
  • What are you using for your DHCP server? Is it a Windows server of some type, or is it your router?

    I'm betting it is your router, because it sounds like the router is blocking access from IP addresses that it has not assigned, or that are outside its range.

    What model router do you have?

    If these questions are all foreign to you, we can start more basically: what is the output of the command "ipconfig /all" on a machine that is working properly? I.e., one of the machines that you do NOT have to futz with every couple of days.

    Post it back, and I'll give you some more to troubleshoot with.

    HTH,

    Glenn

  • Agree with Jason on this one. The first thing to check is the DHCP scope; make sure you have a sufficient IP address range to service all of the clients.

    From joeqwerty
  • Do you have multiple DHCP addresses for systems that require static addresses? Leases expire, so the next time it happens, check allocations to see if you have any doubles. If so, you'll have some fun reading to do on DHCP lease expiration (I'm sure there are a few questions here detailing the broad strokes).

  • Also make sure your DHCP range is not overlapping with addresses that are statically configured. Most DHCP servers will detect this and disable the address from being given out again.

    If you have no (or bad) IP documentation, I recommend you download the nmap utility and run it like this (assuming you use the 192.168.0.0/24 network):

    nmap -sP 192.168.0.1-254 > myNetwork.txt
    

    Now you can start to document.

    From JGurtz

Emails from custom domain going to spam

I just bought a custom domain that I'm trying to use as my primary email address. However, at the moment many of my emails are landing in people's spam folders. Are there any cheap ways around this problem (no-spam lists at Internet providers or that sort of thing)? I'm not a spammer; I'm a fourth-year college student.

  • Maybe you need to set up Reverse DNS? It's a common anti-spam technique to check the domain names in the rDNS.
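
    To check what your sending IP currently resolves to, a quick reverse lookup (the IP is a placeholder for your server's address):

    dig -x 203.0.113.25 +short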

    From rlovtang
  • You'll have to find out why they are going into their spam folders. Depending on what filtering system they use, the scoring might get added to the email header. Ask them to forward you the email as an attachment, then view the original.

    If you are lucky, you'll see something like this:

    X-Spam-Flag: YES
    X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on
        pro-12-gl.savonix.com
    X-Spam-Level: ******
    X-Spam-Status: Yes, score=6.2 required=4.0 tests=BAYES_99,HTML_MESSAGE,
        MIME_HTML_ONLY shortcircuit=no autolearn=disabled version=3.2.5
    

    In this case, the BAYES_99, HTML_MESSAGE, and MIME_HTML_ONLY SpamAssassin rules caused it to get classified as spam. The other answer mentioning rDNS also corresponds to a rule, but many ISPs simply block emails sent from an IP without rDNS, or from an IP whose rDNS doesn't match the name of the SMTP server.

    From Docunext
  • There are a few ways to approach this, none with 100% certainty, but if you try several you'll at least improve your delivery rate. I'm going to assume you have control of DNS for your domain. This is not an exhaustive list.

    • Have an MX record for your domain, so you're able to accept responses.
    • Set up an SPF record for your domain (a sketch follows this list). This isn't too hard if you know the sources of all your email (e.g. you'll always be sending through the same subset of SMTP servers). Microsoft encourages submitting your record to them after setting up SPF, which helps you get through to Hotmail.
    • Check to see if your sending IP address is on any blacklists. Get it off if you can; this process is different for nearly all of them, though.
    • If you can, set up DKIM authentication.
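
    A minimal SPF TXT record might look like this (the IP is a placeholder; list whatever hosts actually send your mail):

    example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 ~all"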

    The last part is just to send "good-looking" email, and this is going to vary a lot depending on how you're creating your messages (in a webapp or in your own desktop client). Docunext's answer is an example of how to approach that problem. This includes things like having the reply-to and from headers match, not using blank subject lines, etc.

    From pjmorse

Need to have the home page redirect to a different document root with Apache.

I want to have anything other than www.example.com and example.com directed to a different root directory.

So example.com and example.com/page will be going to two separate roots.

I need to do this so I can have the home page directed elsewhere while it's under development.

Here is what my vhost for the site looks like:

ServerName www.example.com
ServerAlias *.example.com
DocumentRoot /var/www/example

  • Try this in your .htaccess file at /var/www/example:

    RewriteEngine on
    RewriteCond %{HTTP_HOST} !www\.example\.com$ [NC]
    RewriteCond %{HTTP_HOST} ^(www\.)?([a-z0-9-]+)\.example\.com [NC]
    RewriteRule (.*) %2/other_folder/$1 [L]
    

    Which would redirect anything but www.example.com and example.com to another folder.

    OR

    To actually have a different DocumentRoot, you would need to create a new VirtualHost. You could try, for example:

    <virtualhost *:80>
        ServerName www.example.com
        ServerAlias example.com
        DocumentRoot /var/www/example
    </virtualhost>
    
    <virtualhost *:80>
        ServerName anything.example.com
        ServerAlias *.example.com
        DocumentRoot /var/www/another_example_root_folder
    </virtualhost>
    

    So your first virtualhost would match only with and without www, and the second would take anything else.

    : I tried the first method you mentioned with no luck. I ended up getting this fixed by doing it in PHP. Thanks anyway.
    Prix : @user52177 Yes, you can do that with PHP as well. Anyway, glad you got it working. You might want to add your PHP code as a new answer and mark it as the solution to your question, as it may also help others with the same problem. http://meta.stackoverflow.com/questions/5234/how-does-accepting-an-answer-work/5235#5235
    From Prix

All TCP connections through a SOCKS proxy

Is it possible to route all my connections to the Internet through a SOCKS proxy on a RHEL 4 machine? I need to connect to a remote MySQL server using a SOCKS proxy.

Thanks.

  • You can use SSH to build a SOCKS5 proxy for your MySQL ports, or use something like Squid for a complete proxy solution on your RHEL 4 machine.
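
    A sketch of the SSH route, with placeholder hostnames, assuming tsocks is installed and /etc/tsocks.conf points at 127.0.0.1 port 1080:

    # open a SOCKS5 proxy on local port 1080 through a remote SSH host
    ssh -f -N -D 1080 user@gateway.example.com
    # wrap the mysql client so its TCP connection goes through the proxy
    tsocks mysql -h db.example.com -u dbuser -p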

Making a SOCKS proxy using PHP

Using SSH, I can use another computer as a proxy, for example:

$ssh -D 9999 username@ip-address-of-ssh-server

Now, I can configure my application to use the SOCKS proxy on port 9999 at the IP address given above.

If I own a web server, can I make a PHP script that also listens for connections on a particular port (preferably with access protected by a password)?

  • This isn't something that can really work over an HTTP connection. Most HTTP servers are made to serve short-lived connections, and a SOCKS proxy is going to need the connection to stay open.

    From carson
  • Hi,

    I need your help: how would you configure a PHP application to use a particular port or connect through a SOCKS proxy? I need to access a MySQL server via a SOCKS proxy using PHP. Please help me if you know how to do this.

    Thanks, Karthik

    From Karthik
  • If your server has shell access, you can do this. However, SSH tunneling is not a great way to pass traffic, as it can cause issues with TCP windowing. If you are just looking for a secure proxy to get your data out, I'd look at VPN services instead of web hosting.

SQL Server 2005 TempDB Maintenance

I am working with a SQL Server 2005 database which has 8 files in 1 filegroup for TempDB. The initial size of the 1st file is 8MB, and the other 7 are 2GB each. This database is a reporting DB, which is populated nightly by an SSIS package. The package and reports use a lot of temp tables.

The files have grown to consume about 300GB, evenly distributed. They are set to grow by 200MB, unrestricted. TempDB is not backed up and is on a SAN.

I have read that you should not use SHRINKDATABASE or SHRINKFILE on TempDB. What is the proper way of performing maintenance in this situation to ensure that we do not max out disk space and keep TempDB lean and mean?

Thanks for any advice and knowledge.

  • Here's article 307487 from Microsoft about just that.

    It comes down to a few basic ways:

    1. Restart the SQL instance
    2. Use DBCC SHRINKDATABASE
    3. Use DBCC SHRINKFILE
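
    As a sketch of option 3, shrinking the first tempdb data file to a 1GB target (tempdev is the default logical name for the first data file; check sys.database_files for yours):

    USE tempdb;
    DBCC SHRINKFILE (tempdev, 1024);  -- target size in MB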

    All of those have their issues, as you are aware, but I'm not sure of any way to neatly perform this task, as you're essentially dealing with one of the most important databases in the instance. If you can afford to shut down the instance for a few minutes, that would be the best bet, I'd think.

    Keep in mind if your tempdb has grown this large in the first place there's a good chance it will get there again. If this is a major issue you should investigate why tempdb is growing so large and plan accordingly. The main reason people cry foul on the shrink operations is because databases tend to grow as big as they need to be unless something is being done wrong. It may not apply to you but it's just a general disclaimer I give out for any questions regarding database shrinking.

    Dustin Laine : I read that article, and none of these things can be done while online. Is a SQL restart the typical way of maintaining this?
    Dynamo : A SQL restart restores TempDB to its initial size just because it clears out all the temp data. I'd be hard pressed to call any of the methods "typical"; it's more a case of finding which one works best for your situation.
    Dustin Laine : I guess what I am trying to get at is this: does the tempdb growth need to be looked at, or does the shrinking, by restart, need to be routine?
    Dynamo : Definitely check out the growth. Use sp_spaceused to see how much of that space is not being let go. If your normal operations are causing it to grow to 300GB then that's one thing. If it's unable to release the space then it might be a bigger problem. Tempdb should grow to how big it needs to be and then release that space. The next time it needs to grow it should already have the necessary space allocated. If all it ever does is constantly grow you might have a problem that needs to be remedied before a shrink will do you any real good.
    Dustin Laine : When looking at the shrink dialog it states 99% available space. The SP shows tempdb at 12474.69 MB size and 4929.66 MB unallocated. But the files are 200 GB now.
    From Dynamo

Transform data to a new structure

Hi, I've got an Access database from one of our clients and want to import this data into a new SQL Server 2008 database structure I designed. It's similar to the Access database (including all the columns and so on), but I normalized the entire database.

Is there any tool (microsoft tools preferred) to map the old database to my new design?

thanks

  • Built into Access is the SQL Server Upsizing Wizard. You can use that - read up on it for all the details, but you should be good.

    Tony Toews : SSMA is signifcantly better than the SQL Upsizing Wizard built into Access.
    David W. Fenton : A few days ago I posted my observations on using SSMA (http://www.accessmonster.com/Uwe/Forum.aspx/databases-ms-access/44398/SSMA-for-Access-4-2-Observations -- it's not on Google Groups for some reason), and today discovered it failed to upsize one of the relationships, even though that relationship was present in the source database. So you need to check the results REALLY CAREFULLY.
    From mfinni
  • You could look at using SSIS (part of SQL Server), which is a comprehensive set of tools for doing ETL.

    From Chris W
  • The current tool of choice is SQL Server Migration Assistant (for the appropriate source database; it comes in Access, Oracle, etc. flavors). But it replicates your Access database structure from scratch, rather than importing the data into your pre-built database.

How do I enable ZIP/TAR/BZ2 Download Buttons in my Mercurial HGWEB?

I recently got Mercurial running on my server, shared via Apache. When I browse to the repos via the web, I see a list of my repositories with Atom/RSS links, but no download buttons.

My question is, how do I enable the purple "ZIP", "TAR", "BZ2" download buttons (example: http://hg.pablotron.org/)? I've been trying to find documentation for this, but must be looking in all the wrong places.

I am running Mercurial v1.6.3 on Ubuntu 10.04 with Apache 2.2.14. Thx!

  • From the hgrc manpage, put the following in your configuration file:

    [web]
    allow_archive = bz2, gz, zip 
    
    caseyamcl : Thx! Sometimes the obvious eludes...
    From tonfa

How can I make a Windows 2008 R2 machine have more than 1 hostname?

Have a small network of Windows XP machines set up where everyone's files are on a machine named M99. Recently, I've built a Windows 2008 R2 machine, and would like to move the files to it, but...this server is named FMS. All the other computers have configuration pathnames that would have to change from \\M99\whatever to \\FMS\whatever, and I don't want to spend all day (or days) making this change. What I would like to do (for now), is just change the name of the M99 computer, and somehow make the FMS computer look like it's M99. This fix will solve things nicely until the larger project of re-working everything into an AD domain is completed. How can I make FMS take on M99's identity?

  • Create your shares and move the files as you are planning, then do one of the following based on what your name resolution mechanism is:

    1) create a DNS alias (CNAME) record. The name of the record would be M99 and the machine it points to would be FMS. Then M99 would resolve in DNS to FMS.

    or

    2) create host file entries on your workstations for M99 with the IP address of FMS.
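
    For example, a hosts file entry on each client could look like this (the IP is a placeholder for FMS's actual address):

    # C:\Windows\System32\drivers\etc\hosts
    192.168.1.50    M99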

    Note both of these solutions assume that the FMS server is actually not available on the network as FMS anymore so you'll either have to completely take it offline or rename it.

    You will also have to disable strict name checking on the FMS server; otherwise, connections to the alias will fail (i.e., it will only accept connections that use the FMS name).

    Create a new DWORD at the following location, set the value to 1, then reboot the server to make it active:

    HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\DisableStrictNameChecking 
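
    Equivalently, a one-liner from an elevated command prompt (all on one line):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v DisableStrictNameChecking /t REG_DWORD /d 1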
    

    See this TechNet article for more info.

    Scott : Option 1 is not an option as we do not use DNS yet, just simple NetBIOS B-node name resolution. I considered option 2, but am trying to avoid a solution that requires me to do anything on the clients. One interesting thing you stated was that I could not refer to the server as FMS for either of these options. That puzzles me. Why not? I would need the new server to still be FMS because there are new shares that use the FMS name. Surely there's a way to have the FMS server respond to NetBIOS name resolution broadcasts for both FMS and M99...
    squillman : @Scott I guess another option you might be able to get away with is using WINS, but I feel dirty even saying that and throw up in my mouth any time I do. Also, regarding not using the FMS name, you won't be able to unless you make the registry edit that I mention. You'll be all good with whatever names you want if you make that registry edit, otherwise you'll only be able to connect to shares using the computername you assign to it.
    From squillman

Linux: Checking the date a group was created

Is there a way I can check the date that a linux group was created and/or modified? It would be even better if I could pull the last user to modify the group.

  • Assuming we're talking local files here (not LDAP) and no additional auditing software, you're pretty much limited to the metadata of /etc/group; you can see when the file was last modified, but not by whom or which group(s) were affected.

    Urgoll : James, Prix, you are assuming that the groupadd / groupdel commands are used. The /etc/group format is trivial, and this file is often simply edited by hand. Also, process accounting is a heavyweight solution, as it will have a system-wide performance impact. If complete group auditing is required, it might be better to implement LDAP and use the auditing facilities of the LDAP server, then use an automated process such as Tripwire to ensure /etc/group isn't modified.
    From Urgoll
  • You can see the last commands using lastcomm; for that you must have acct enabled, by adding the following to your init script:

    # Turn process accounting on. 
    if [ -x /sbin/accton ]
    then 
            /sbin/accton /var/log/pacct 
            echo "Process accounting turned on." 
    fi
    

    To create the accounting record file:

    touch /var/log/pacct
    chown root /var/log/pacct
    chmod 0644 /var/log/pacct
    

    One thing I can recommend you do is to alter both your groupadd and groupdel: move them somewhere else and create two bash scripts that will log the user that invoked them, the time, and the command, and after that call the actual binaries to create or delete the groups.

    A small sample:

    mv /usr/sbin/groupadd /usr/sbin/new_groupadd

    Now create a new /usr/sbin/groupadd with the following content (don't forget to chmod it after you're done):

    #!/bin/bash

    # log who invoked groupadd, when, and with what arguments, then call the real binary
    echo "`date` - $USER - /usr/sbin/new_groupadd $@" >> /var/log/groupadd_log
    /usr/sbin/new_groupadd "$@"
    

    Create the record file:

    touch /var/log/groupadd_log
    chown root /var/log/groupadd_log
    chmod 0644 /var/log/groupadd_log
    

    As James Lawrie pointed out, look in /var/log/secure and all of its rotated files (if the entry is too old already) to find out when a group was last changed, but it will not log anything if you give users other than root the ability to add groups.

    From Prix
  • Just look in /var/log/secure. I created and modified a group as an example. Please note that the command may not relate to the last session opened, so it could be difficult to tell who actually did it:

    Aug 30 20:38:09 aladdin su: pam_unix(su-l:session): session opened for user root by james(uid=0)
    Aug 30 20:38:15 aladdin groupadd[2442]: group added to /etc/group: name=test, GID=501
    Aug 30 20:38:15 aladdin groupadd[2442]: group added to /etc/gshadow: name=test
    Aug 30 20:38:15 aladdin groupadd[2442]: new group: name=test, GID=501
    Aug 30 20:39:03 aladdin groupmod[2450]: group changed in /etc/group (group test/501, new gid: 502)
    Aug 30 20:39:03 aladdin groupmod[2450]: group changed in /etc/passwd (group test/501, new gid: 502)

    And yes, my machine is called aladdin - what of it?

    Dennis Williamson : And your password is "opensesame"! ;)

How do I create a 1-line command using "sc start" or "net start" on services whose names match a pattern?

I am creating a batch file to start several services. Instead of enumerating each "net start" with the exact service name, how can I create the script to run "net start" on any service that begins with "EED"?

Thanks.

  • A FOR-IN-DO loop is one way you could do this.

    FOR %%x in ("Service 1" "Service 2" "Service ...") DO net start %%x
    

    The syntax may vary, depending on the service name and optional parameters. Of course, this doesn't meet your requirement to enumerate the list based on service names that start with "EED." You will have to list each service specifically, or use more complex code to get that done. Type FOR /? at a command prompt for more information on the extensive options that this command provides.

    From Sam Erde
  • Here is a batch file to do exactly that:

    @Echo Off
    for /f "tokens=1,2" %%i in ('sc query') do if "%%i"=="SERVICE_NAME:" call :Process %%j
    Goto :EOF
    
    :Process
    set @Name=%1
    if "%@Name:~0,3%"=="EED" (SC start %1
    Echo %1 Started)
    
    :EOF
    

    If you want to change the "prefix search" change the line that says:

    if "%@Name:~0,3%"=="EED" (SC start %1
    

    The prefix to search for is "EED", and you have to make sure that you change the length number, which is the "~0,3" part. If you want all services that start with "Exchange", then change that number to "~0,8".

    HTH,

    Glenn

    Glenn Sullivan : I just realized that this is not a one-line command... sorry. But it is possible to make this a generic batch file that would work like "StartServicesWithPrefix EED"; it would just take a bit more work. Glenn
  • If you're open to using PowerShell instead of a batch file, this one liner will fix you up.

    Get-Service EED* | Start-Service

    : this worked perfectly
    tony roth : or wmic service where "name like 'eed%'" call startservice
    From Ben

MS Office 2010 licensing requirements on a Terminal Server

I have a client that would like to upgrade MS Access on his Terminal Server to 2010. From the research I have done, it appears that he needs to actually buy a $160+ copy of MS Access for every single user that logs onto the server? This seems a little ridiculous to me. Prior to 2010 it was possible (although I'm not sure it was legal) to install a retail version of any of the Office suite and away you went!

I just today tried installing a retail version of MS Access on their server and was greeted with a nice little message saying that I couldn't install it on a Terminal Server.

I am aware of the Runtime edition of Access and maybe that is the route we will have to go. I could install the full version on a client computer and use it there.

Do I really need to purchase a license for every single user?

  • You need a volume license copy of any Office application or suite in order to install it on a terminal server.

    As far as the number of licenses, Office is licensed per device, not per user. You need a license for each device that will be accessing the terminal server. If you have devices that already have licenses for Access, you do not have to repurchase them for the terminal server but they need to be licensed to use the version that is installed on the server.

    It is probably best to bring up these licensing types of questions with your supplier or Microsoft. They will have much more authoritative information than anybody here is able to provide you.

    Chris S : +1, This information is correct; and you need to contact a MS Licensing Partner to buy the volume license anyway, so it's best to talk over your specific situation with them before buying.
    Icode4food : Thanks for the input. I'm not exactly an IT pro. (just trying to be for this one company that doesn't want to pay someone that actually knows) Your answer is basically what I was expecting and answers all the question I have. Thanks.
    From Jason Berg
  • There are two moving parts to licensing for running Office apps on Windows Terminal Server:

    1. the client workstation license

    2. the WTS CAL

    The first rule is that workstations running Terminal Server sessions have to have a license to the software they are running. Before Office 2007, this was very, very loosely enforced, but with Office 2007 (and 2010), things have been tightened up significantly. For one, you have to install the Enterprise version of Office on the Terminal Server. Secondly, it really won't allow a connection if the client workstation doesn't have the appropriate licenses.

    The WTS CALs control how many connections can be made to the Terminal Server, independent of what apps are being run. They are just like your standard CALs for any Windows Server software. They can be assigned per user or per device. It used to be that the licensing software worked reliably with one and not reliably with the other, but I've forgotten the details (it was more than 5 years ago that I encountered the issue).

    If you're trying to support users who don't have Access or Office installed on the devices they are connecting to the Terminal Server from, then you are better off engineering your Access application to use the runtime on the Terminal Server, because that eliminates any software licensing issues (though not the CAL requirement).

how can I "unlink" files stored in SVN repo

I've got two sites in my local www folder, say site1.com and site2.com.

They are both stored in my SVN repo on the dev server, and I use TortoiseSVN.

I've copied some files from one of the sites to the other, but SVN now has these files linked together in the repo.

E.g. if I update main.css on site1 and commit it, when I update main.css on site2, it gets the changes I made in site1.

How can I unlink these files again?

  • I resolved the issue I was having by doing an export of the working copy to another folder, then removing the site from the SVN repo and re-importing it from scratch.

    Still hoping there is a better way to resolve the problem, though (I now know to use export if I want to copy a file or a directory from a working copy).

    From jsims281
  • How did you copy the files from site1 to site2? If you used the "svn copy" command line, it should do what you want (copy the current version of the file to a new location, but from then on they have separate lives).

    Alternatively, you can copy the files normally and then do an 'svn add', though that is less efficient.

    If you use the second technique and copy whole directories, know that you will also copy svn's metadata, and that is sure to cause problems.

    Now, to unlink the files, I would guess that an 'svn remove' of one of them would do the trick. Then re-copy it as explained above (a sketch follows).
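
    A sketch of that remove-then-recopy sequence, with placeholder paths for your working copies:

    # drop the linked copy from site2 and commit
    svn remove site2/css/main.css
    svn commit -m "Remove linked main.css" site2
    # copy the file over with plain cp, then add it as a brand-new file
    cp site1/css/main.css site2/css/main.css
    svn add site2/css/main.css
    svn commit -m "Add independent main.css" site2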

    From Urgoll

CouchDB like replication for MySQL?

Is it possible to setup CouchDB like replication for MySQL?

  • Replication is to be initiated from a web application.
  • Replication should be a two-way process, i.e. synchronization.
  • If there is a failure in the network connection, replication should pick up where it left off.
  • Synchronization should be incremental.
  • The kind of data stored is invoices.
  • Synchronization should be atomic: either it copies the whole invoice, or nothing.

Should I go for custom synchronization logic here? I am planning to use Hibernate for data storage.

  • You can configure MySQL in master-master mode in an active-passive configuration. One server serves as the master while the other stays in sync as a slave of it (plain MySQL replication is asynchronous in nature, by the way). The secondary also acts as the master of the primary, but since it's not being written to, nothing actually gets written back to the primary in this case. If the primary fails, you can start using the secondary as the main master (i.e., point your app at it). Eventually, when you fix the primary, assuming all the data and configuration are intact, you can re-establish replication and it can pick up where it left off.
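
    As a rough sketch, the replication-relevant my.cnf settings on the first server might look like this (the server-id and offset values are assumptions; mirror them on the second server with server-id = 2 and auto-increment-offset = 2):

    [mysqld]
    server-id                = 1
    log-bin                  = mysql-bin
    # avoid auto-increment key collisions if both masters ever accept writes
    auto-increment-increment = 2
    auto-increment-offset    = 1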

    Check out the Multi-Master Replication Manager project that can help you achieve this:

    http://mysql-mmm.org/

    Good luck!

    : My goal is to use it for offline distributed application. Will Multi-Master replication help?
    vmfarms : No, MySQL does not support distributed configurations. You can always shard your data, but this is extremely complex and generally not recommended. MySQL is much more of a traditional RDBMS than CouchDB is.
    vmfarms : And also, MySQL replication can't be set up from a web interface. It needs to be configured on the server directly.
    From vmfarms