Tuesday, January 25, 2011

MySQL problem with many concurrent connections

Hi, I have a six-core server with 32 GB RAM. I've installed MySQL 5.1.47 (backport). The config is nearly standard, except for max_connections, which is set to 2000. Alongside it there is PHP 5.3/FastCGI on nginx, serving a very simple PHP application. nginx can handle thousands of requests in parallel on this machine. The application accesses MySQL via mysqli. When using non-persistent connections in mysqli, there is a problem once 100 concurrent connections are reached.

[error] 14074#0: *296 FastCGI sent in stderr: "PHP Warning: mysqli::mysqli(): [2002] Resource temporarily unavailable (trying to connect via unix:///tmp/mysqld.sock) in /var/www/libs/db.php on line 7

I have no idea how to solve this. Connecting to MySQL via TCP is terribly slow. The interesting thing is that when using persistent connections (adding 'p:' to the hostname in mysqli), the first 5,000-10,000 requests fail with the same error as above, until the webserver's max connections (set to 1500) is reached. After those first requests MySQL keeps its 1500 connections open and everything is fine, so I can make my 1500 concurrent requests. Huh? Could this be a problem with PHP FastCGI?

  • It's possible that, if you connect 10,000 times at once, 1500 of those 10,000 will be accepted and the remaining 8500 will be rejected (as in, will not be delayed until the persistent connection is available). Then, subsequent requests will use the 1500-large persistent pool.

    Is there a problem with opening the persistent connections once when you start up the server?

    Borealid : @cuti123: so why don't you set up the persistent connections before starting the application for real? The problem may be that your DB server is limiting the maximum rate of opening new connections.
    From Borealid
  • As I wrote, the "resource temporarily unavailable" error occurs every time, even with only a few hundred non-persistent connections. Now, I've added

        usleep(mt_rand(500000, 1500000)); // sleep 0.5-1.5 s before connecting

    before the mysql connect call in my PHP script, so that this script sleeps for a random time (0.5-1.5 seconds) before connecting to MySQL. And tadaaa, the problem doesn't occur with non-persistent connections. So could it possibly be that MySQL has a problem with massive concurrent thread creation?

    Borealid : Or maybe it's that MySQL (sensibly) rate-limits your connections to protect itself? See http://mysqlha.blogspot.com/2007/08/rate-limiting-for-innodb.html
    : I don't think so, because the problem is already in the connection to MySQL, not in a query. At this point it shouldn't have anything to do with InnoDB (IMHO).
  • This sounds like maybe connections aren't being cleaned up in a timely manner. "show processlist" should show the connections if that's the case. Looking at the MySQL documentation, it appears this may be common with PHP unless certain PHP parameters are tweaked. You may want to look through the article and comments at http://dev.mysql.com/doc/refman/5.0/en/too-many-connections.html

    Another possibility brought up by the very slow TCP connections is perhaps MySQL (or tcpwrappers) is trying to do hostname lookup for access and the lookup is slow. This seems unlikely when using unix sockets, but in case it's trying to match localhost or system hostnames it may be worth looking into whether any access rules could be moved to IP or removed.

    From Jeremy M
  • Thanks for your replies. To me it looks like a problem with concurrent thread creation by MySQL, because once the 1500 persistent connections are established no more errors occur, and these connections can be used by 1500 scripts at a time. So I don't believe it's a problem with parallel access to the socket, or a TCP problem.

    To sum up

    • ulimit=32768, max_connections=2000, php_fastcgi_children=1500
    • execution of 1500 concurrent requests with non-persistent connections always fails with "Resource temporarily unavailable" (only a small portion of the requests is handled)
    • when adding a random delay of 0.5-1.5 seconds before connecting to the database (with non-persistent connections), this problem completely disappears
    • when using persistent connections, this problem occurs until all PHP FastCGI children have established their persistent connection to the database (so while the mysql processlist shows fewer than 1500 connections, the failure still occurs, but not as often)
    • when using persistent connections, once all PHP processes have established a connection to the database, the failure no longer occurs
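One explanation consistent with this summary (an assumption on my part, not verified on this box): "Resource temporarily unavailable" on a unix-socket connect is EAGAIN, which typically means the listener's backlog queue overflowed, and MySQL 5.1's back_log defaults to only 50 pending connections. A hedged sketch of the relevant knobs:

```
# /etc/mysql/my.cnf (path may differ) -- allow more queued connection attempts
[mysqld]
back_log = 1500

# The kernel silently caps any listen() backlog at net.core.somaxconn
# (default 128), so raise that too:
#   echo 1500 > /proc/sys/net/core/somaxconn
```

This would also explain why a random pre-connect delay makes the problem vanish: it spreads the connection attempts out so the backlog never fills.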
  • What OS is this running on top of? Windows has a terribly long timeout on TCP connections, something on the order of 90 seconds. You have to make some registry edits to bring it back down to something shorter. It also helps to raise the port limit from 10k to something higher.

    : I'm using Debian Lenny. The following additional settings were made:

        echo 131072 > /proc/sys/net/nf_conntrack_max
        echo 131072 > /proc/sys/net/netfilter/nf_conntrack_max
        echo 3 > /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_time_wait
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    But remember, all interprocess communication here goes via unix sockets, so these settings don't affect it.
    From aduljr
  • Have you also adjusted the max_user_connections parameter? It could be that nginx is treated as a single user, and is capped by that parameter.

    From blacklotus

Possible to IPSec VPN Tunnel Public IP Addresses?

A customer uses an IBM SAS product over the internet. Traffic flows from the IBM hosting data center to the customer network through Juniper VPN appliances. IBM says they're tunneling public IP addresses, not private ones. Is this possible? What does this look like in the VPN configuration and in the packets? I'd like to know what the source/destination IPs/ports would look like in the encrypted tunneled IPSec payload and in the IP packet carrying the IPSec payload.

IPSec Payload:  source:          destination:
IP Packet:      source:          destination:

Is it possible to send public IP addresses through an IPSec VPN tunnel? Is it possible for IBM to send a print job from a server on their network using the static-nat public address over a VPN to a printer at a customer network using the printer's static-nat public address? Or can a VPN not do this? Can a VPN only work with interesting traffic from and to private IP addresses?

  • Of course it's possible. Technically speaking, there's nothing different about public IP addresses versus RFC1918 addresses other than the fact that RFC1918 addresses have been reserved for private use.

    In your IPsec config, you can match and tunnel public addresses just like you can RFC1918 addresses.
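As an illustration of this point, a hypothetical strongSwan-style ipsec.conf (the hostname and RFC 5737 documentation prefixes below are placeholders, not IBM's or the customer's real ranges) matches public subnets with exactly the same syntax as private ones:

```
conn ibm-sas
    left=%defaultroute
    leftsubnet=198.51.100.0/24     # customer-side public range (placeholder)
    right=vpn.example.com          # remote gateway (placeholder)
    rightsubnet=203.0.113.0/24     # IBM-side public range (placeholder)
    authby=secret
    auto=start
```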

    From ErikA
  • Yes, we tunnel from our corporate network to a public IP range in the datacenter. This way, we can allow all traffic from the corp VLANs to the datacenter, while restricting flow from the outside into our data centers.

    IBM may not allow tunneling to your private network, since it may conflict with their own internal networks. For example, suppose the printer you want to connect sits in one of your corp VLANs, and the SAS product is in a VLAN managed by the IBM networking team. They are not going to tunnel traffic between those private ranges, even though such an IPSec tunnel is possible; if they created such a tunnel, they would have problems making use of their own VLAN in the future.

    The best option for you is this:

    1. Create a tunnel between IBM and public IP range of your company.
    2. Create a NAT and/or PAT mapping from publicIP:port to printerIP:port
    From RainDoctor
  • Is it possible to NAT a private IP address (inside) to a public IP address (outside) and then send the public IP through the IPSec tunnel?

    From karwan

How to setup nginx + thin + multiple domains + multiple virtual hosts?

I have a VPS. I create websites with Rails. I have several domains.

Now I want to install nginx + thin and configure them to support multiple hosts, with each host having a separate domain.

I have searched a lot, and now nginx and thin are installed and running, but I don't know how to combine them together.

Are there any articles or documents that can help me? Thanks in advance.

Apache redirect without URL rewriting

Is there a way I could achieve redirecting to another site, without losing the original URL? So, what we'd want to achieve is to be able to access:
somesite.com/some/url -> someothersite.com/some/url
but we'd like to retain somesite.com/*.
In other words, we'd want to access every URL of someothersite.com via somesite.com.

  • If you're trying to redirect just a subdirectory of the server, you can use mod_rewrite like this:

    RewriteCond %{REQUEST_URI} /some/url [NC]
    RewriteRule ^(.*)$ http://other.example.com/$1 [R]

    If you're trying to show the content of the other site without the user seeing the other URL, you'll need both mod_rewrite and mod_proxy:

    RewriteCond %{REQUEST_URI} /some/url [NC]
    RewriteRule ^(.*)$ http://other.example.com/$1 [P]

    Other configuration steps may be necessary for mod_proxy, depending on your environment.
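For example (a hedged sketch; exact directives depend on your Apache version), you would normally make sure forward proxying stays off and that redirects issued by the backend are mapped back to your hostname:

```
ProxyRequests Off                            # reverse proxy only, never an open proxy
ProxyPassReverse / http://other.example.com/ # rewrite backend Location: headers
```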

    Apache's Advanced mod_rewrite Guide may be of benefit.

    From Chris S
  • I'd just throw in a file that simply redirects the user with an HTML meta-refresh, or if you want to redirect the user in the most accepted way possible, throw in a 301 redirect rule in an .htaccess file somewhere on the site.


    Redirect 301 /some/url http://newsite.com/some/url
    Chris S : A meta-refresh would affect the www.example.com page as well; and you'd have to add it to every single page in the whole site for it to work correctly. Adding a redirect in an .htaccess file wouldn't work if there's no default record for example.com.
    Christian Paredes : Hm, good point. I've probably read the question wrong: I see that he wants to redirect EVERY URL from somesite.com to someothersite.com; however, this could use some clarification.
    pugmarx : Added clarification as per your inputs. :)
  • Are you actually looking for a redirect, or to have that other site's content appear under your original URL (more like proxying)? I'll assume that the other server is your content and either internal/backend, or if public, that you're aware of the negative SEO implications. If someothersite.com is not your site and content, you probably shouldn't be doing this.

    You'll need to have mod_proxy and mod_rewrite enabled, which likely just means uncommenting the lines mentioning them if that's not already done.

    Then in the somesite.com config:
    RewriteEngine on
    RewriteRule /some/url(.*) http://someothersite.com/some/url$1 [P,L]

    From Jeremy M

Mysterious timeouts in MSSQL 2000

I have a query that has been working fine for ages (2+ years).

All of a sudden it went from taking 3 secs to 30+ secs to execute. The table indexes have minimal fragmentation, but rebuilding the indexes "fixes" it for about a day.

When I go to check the fragmentation it is around .5 (not bad). The table only sees about 100 inserts and another 100 updates a day.

There are no locks on the table when it happens. Any ideas of what to check?

  • Check whether your statistics are out of date and update them accordingly.

    gbn : index stats are updated as part of an index rebuild.
    From DaniSQL
  • What does the execution plan look like when it is fine, vs when it isn't working correctly?

    Are you rebuilding the indexes or defragging the indexes? How big is the table? When was the last time you updated stats on the table?

    Christopher Kelly : table has a grand total of 4 integer columns and ~20000 rows
    Farseeker : +1 for execution plan
    mrdenny : How much data is changing on the table daily? It is really sounding like the stats are out of date. Can you post the execution plans?
    From mrdenny
  • It's probably parameter sniffing (StackOverflow search).

    The index rebuild implicitly rebuilds the index statistics, which invalidates the cached plan, which "fixes" it.
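If that is the cause, two cheaper alternatives to a nightly index rebuild are worth trying (the object names below are hypothetical; both commands exist in SQL Server 2000):

```sql
-- Refresh statistics without rebuilding the index
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN

-- Mark a suspect stored procedure's cached plan for recompilation on next use
EXEC sp_recompile 'dbo.YourProc'
```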

    From gbn

404 on new install of Glassfish

I am BRAND NEW to Glassfish, so I thought I would install a copy and poke around (I'm actually looking at a number of Java servers for personal edification). So, I installed a basic copy of Glassfish 3 on Ubuntu 10 and java-6-openjdk.

My first run through, I tried using sudo ./glassfish-3.0.1-unix.sh. Unfortunately, when I called ./asadmin start-domain and then navigated to the admin panel (at the default location), I found only this (there is more HTML on the page, but this is the only text; I can provide the entire HTML doc if needed):

<div id="copyright" style="width: 720px; margin-left: auto; margin-right: auto; padding: 5px;"> 
        <span>Copyright © 2005, 2010, Oracle and/or its affiliates. All rights reserved.  U.S. Government Rights - Commercial software. Government users are subject to the Oracle standard license agreement and applicable provisions of the FAR and its supplements. Use is subject to license terms. This distribution may include materials developed by third parties. Portions may be derived from Berkeley BSD systems, licensed from U. of CA. Oracle, Java and GlassFish are registered trademarks of Oracle and/or its affiliates.</span> 

Also, the form with the ID "loginButton" is clearly missing from the HTML.

So, I uninstalled that version of Glassfish and installed using sudo ./glassfish-3.0.1-web-unix.sh. This had the same error. Finally, I tried installing it without using sudo, but I ran into the same error.

How do I get the stupid admin page working?

  • Have you tried it with the real version of java rather than openjdk?

    Christopher W. Allen-Poole : That worked immediately. Thank you.
    From JamesRyan

How do I edit Network Solution's DNS settings to allow no-www?

I have a domain registered through Network Solutions, foo.com. I have it setup to point to a Blogger account I have setup. I followed the instructions at:

It loads http://www.foo.com fine.

But when I go to http://foo.com, it redirects to a subdomain that no longer exists (example: http://abc.foo.com). I think it's from an old DNS setting I had set up at 1&1 hosting a while ago. I no longer have access to my 1&1 account, so I can't view what DNS settings were in place.

How do I edit the DNS settings at Network Solutions so that http://foo.com points to the same place http://www.foo.com points to?

  • DNS and HTTP are completely separate protocols.

    HTTP handles redirects. So if the url is changing (redirect) it's in the HTTP Server configuration.

    DNS handles A and CNAME records; and since foo.com redirects, its default record must exist. Sounds like DNS is set up correctly (probably).

    From Chris S
  • I ended up adding a CNAME record. So whenever http://abc.foo.com occurs, it redirects back to http://www.foo.com.

    Host Aliases (CNAME Records)
    Alias a domain name for another domain.
    Alias                 TTL      Refers to Host Name
    abc.foo.com         7200     www.foo.com 

    This seems to have fixed my problem for me.

    Chris S : This is **not a redirect**. Unless you have code in your HTTP configuration that redirects all non-www-prefixed traffic, it'll just be another way to access your site. Users will see abc.foo.com, not www.foo.com.
    : I don't know anything about code in an HTTP configuration, since the site is hosted by Blogger (blogspot.com). Also, I'm not sure I'm using the term 'redirect' correctly. Previous to this CNAME addition, http://foo.com loaded the URL http://abc.foo.com, which doesn't exist, so a Google error page occurred. Now typing in http://foo.com brings up http://www.foo.com. I'd rather it be http://foo.com, but I think Blogger requires www.

webmail suite recommendation

Hello, I have several email addresses in a few domains (email@domain1, email@domain2, email@domain3).

Currently they are on my own email server and I am collecting mail via IMAP (I would not like to use POP) in Thunderbird.

I have a few partners and I want to allow them to access the same email addresses.

Here is what I desired:

All users can open all the inboxes via IMAP in Thunderbird (with proper configuration).

At the same time, there is a webmail system where every user can log in with their own account (userA, userB, userC),

and they will see all inboxes (email@domain1, email@domain2, email@domain3).

Would you recommend any suite that fits my needs?

Either a system to be installed on my server or a remote service where I need to configure MX records will do.

Thank you.

  • You could consider Citadel. Check out its list of features; it sounds like exactly what you need.

    From sybreon
  • I too run my own mail server and have tried numerous webmail suites.
    What I recommend is Roundcube. It's configurable and sexy.
    My setup runs on top of Dovecot and Postfix.

  • I love my SquirrelMail, it's got very few requirements, a simple interface, and gets the job done with minimal fuss. If you're looking for "oh shiny!" look elsewhere, this is a workhorse.

    Support for simultaneous multiple mailboxes isn't there yet; it's being worked on in the development branch. You can configure it to use multiple backend servers, but you can currently only log in to one at a time.

    From Chris S

running remote debian bash file from windows bat file

Hi, I have two machines:
1) Windows XP Professional, where I have an Exec.bat which takes one parameter
2) a Debian machine, where I have a Run.sh which also takes one parameter

Currently I am switching between machines and doing the following:
Run.sh 1
Exec.bat 1
Run.sh 2
Exec.bat 2

My question is: how can I run Run.sh, which takes one argument, from inside Exec.bat? The other way around would also be good as a solution. Thanks!

  • More info needed.

    What do the scripts do? Consider implementing the Windows script on Linux.

    From frisbee23
  • Install an ssh server on one machine and a client on the other machine. It's easier to set up the server on Linux (just install the ssh server package) and the client on Windows (you can install putty, or a port of unix utilities such as Cygwin). Set up public key authentication so you can log in through ssh without having to type a password each time (the putty manual, or any of a zillion tutorials on the web, explain how).

    Then run your commands from a batch file as follows:

    ssh linux.example.com run.sh 1
    call exec.bat 1

    Another possibility is Winexe.

    From Gilles
  • Take a look at Plink, from the makers of PuTTY. It is used for automated operations such as running commands through scripts like this. I use it frequently to write scripts, it is very straightforward:

    PuTTY Link: command-line connection utility
    Release 0.58
    Usage: plink [options] [user@]host [command]
           ("host" can also be a PuTTY saved session name)
      -V        print version information and exit
      -pgpfp    print PGP key fingerprints and exit
      -v        show verbose messages
      -load sessname  Load settings from saved session
      -ssh -telnet -rlogin -raw
                force use of a particular protocol
      -P port   connect to specified port
      -l user   connect with specified username
      -batch    disable all interactive prompts
    The following options only apply to SSH connections:
      -pw passw login with specified password
      -D [listen-IP:]listen-port
                Dynamic SOCKS-based port forwarding
      -L [listen-IP:]listen-port:host:port
                Forward local port to remote address
      -R [listen-IP:]listen-port:host:port
                Forward remote port to local address
      -X -x     enable / disable X11 forwarding
      -A -a     enable / disable agent forwarding
      -t -T     enable / disable pty allocation
      -1 -2     force use of particular protocol version
      -4 -6     force use of IPv4 or IPv6
      -C        enable compression
      -i key    private key file for authentication
      -m file   read remote command(s) from file
      -s        remote command is an SSH subsystem (SSH-2 only)
      -N        don't start a shell/command (SSH-2 only)
    From RHELAdmin

-bash: ./flume: No such file or directory BUT flume is there and works elsewhere

We have a product built on CentOS 4 32-bit Linux that runs unmodified on 32- and 64-bit CentOS/RHEL 4 and 5 and SLES 10. It also runs unmodified on SLES 9 64-bit. (SLES 9 32-bit requires a different libstdc++.)

The name of the main binary executable is 'flume'

Yesterday we tried to put this on 64-bit Ubuntu 10 and, even though the file is there and the right size, we get:

-bash: ./flume: No such file or directory

file flume shows it to be a 32-bit ELF (can't remember the exact output and the system is on an isolated network)

If put into /usr/local/bin, then "which flume" returns /usr/local/bin/flume.

I did not try ldd flume yet.

I now suspect that some library is not there.

This is a profoundly unhelpful message and one I have never seen before.

Is this peculiar to Ubuntu, or perhaps just to this installation?

We gave up and moved to a RHEL 4 system and everything is fine. But I sure would like to know what causes this.

  • Did you try to give execute rights to the file?

    chmod +x ./flume 
    basvdlei : That would give "Permission denied" instead of "No such file or directory"
    SvenW : Yep, you are right.
    lcbrevard : Sorry, I should have made clear that the mode bits showed as executable - i.e., chmod +x flume was done at the very beginning. I also checked with lsattr to see if anything was funny there but that was OK too. I have not had a chance to try ldd or strace because the server room had an air conditioning failure and everything is powered off!
    From SvenW
  • You can get this message if flume exists but its “loader” doesn't exist, where

    • the loader of a native executable is its dynamic loader, for example /lib/ld-linux.so.2;
    • the loader of a script is the program mentioned on its shebang line, e.g., /bin/sh if the script begins with #!/bin/sh.

    In your case, it looks like you don't have the 32-bit dynamic loader installed on the 64-bit Ubuntu system. It's in the libc6-i386 package.

    strings ./flume | head -n 1 will display the path to the dynamic loader that flume requires. This is one of those rare cases where strace ./flume is completely unhelpful.

    I consider this situation to be Unix's most misleading error message. Unfortunately fixing it would be hard: the kernel can only report a numeric error code to the caller of the program, so it only has room for "No such file or directory" and not for the name of the loader it's looking for.
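The effect is easy to reproduce without flume (the path below is a throwaway example): an executable whose interpreter line points at a missing loader fails with ENOENT even though the file itself is plainly present.

```shell
# Create an executable whose "loader" (here, the shebang interpreter)
# doesn't exist -- analogous to a 32-bit binary missing /lib/ld-linux.so.2.
cat > /tmp/demo-noloader <<'EOF'
#!/no/such/interpreter
echo never reached
EOF
chmod +x /tmp/demo-noloader

# exec() fails (shells report a nonzero status, typically 126 or 127),
# even though ls shows the file exists and is executable.
/tmp/demo-noloader || echo "exec failed with status $?"
ls -l /tmp/demo-noloader
```

The same confusion applies whether the missing piece is a shebang interpreter or an ELF dynamic loader: the kernel only hands back ENOENT.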

    From Gilles

Powershell unable to "see" dnscmd.exe

Hi Folks,

I am at a complete loss with this one. Recently a number of my PowerShell scripts have started failing because they are unable to find the command dnscmd.exe.

What has me at a loss is that the executable exists and works and I can run it just fine in the command prompt. I have tried the following in powershell to run the command:

  • dnscmd
  • & dnscmd
  • & dnscmd.exe
  • & c:\windows\system32\dnscmd.exe

All return "The term dnscmd is not recognized as the name of a cmdlet, function, script file or operable program...."

Can anyone enlighten me as to why PowerShell is completely unable to see the command, while the normal command prompt, Windows Explorer, etc. can? Using PowerShell 2.

  • What do you get when you execute:

    Get-Command dnscmd.exe -CommandType Application

    Ben Short : Get-Command : The term 'dnscmd.exe' is not recognized as the name of a cmdlet, function, script file, or operable progr am. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
    Ben Short : Just a link to a screenshot - http://dl.dropbox.com/u/8408796/dnscmd.JPG Confirmed that c:\windows\system32\ is in the powershell path.
    From Shay Levy
  • This sounds like the work of the File System Redirector to me. Are you running on a 64bit OS?

    For example, when you make a call to C:\Windows\system32\dnscmd.exe on a 64bit OS using PowerShell (x86), this will be redirected to C:\Windows\SysWow64\dnscmd.exe. The redirection occurs at the point the folder is accessed so even though you are explicitly providing the path to the executable, you'll still be diverted. This can be disabled on the current thread via calls to native Windows APIs or can be avoided through the use of certain variables and aliases.

    More information here: http://msdn.microsoft.com/en-us/library/aa384187%28VS.85%29.aspx

    From fletcher
  • It's the File System Redirector, but you can bypass it.

    Instead of

        C:\Windows\System32\dnscmd.exe

    (which the redirector will grab) use

        C:\Windows\Sysnative\dnscmd.exe

    Ben Short : oddly the Windows 7 & Windows Server 2008 machines I am testing this on do not recognise the SysNative alias. Any thoughts on why this may occur? :)
    Ben Short : Scratch that - found out it was because I was running 64 bit Powershell, and not 32, which meant the Wow64 layer was not kicking in.
    From Taylor

Which RPM fixes this failed RedHat dependency?

I'm installing an RPM on RedHat Enterprise Linux 5 and it fails with the following error:

error: Failed dependencies:
    rpmlib(PayloadIsLzma) <= 4.4.6-1 is needed by vspd-1/43-rsmu.i586

Which package contains rpmlib? I've tried:

yum install rpmlib //didn't exist
yum install rpm //already installed
yum install rpm-devel //not the right one

I've looked around for the answer to this question and all I've found is...
...people looking for the answer to this question!

Like on this forum and a dozen others. I've successfully installed this RPM (VSPDL) before so I know the answer exists and someone probably knows it. I just don't remember which package magically fixed the dependency.

All suggestions are appreciated,


  • I hope this helps whoever it is that favorited this question.

    I finally got Google to give me better results. Per this forum and this Experts Exchange response (scroll to the very bottom):

    It's not a problem of rpm versions; it's just flat-out not supported: the RPM has been produced with LZMA payload compression, which is not supported by RHEL (CentOS).

    My only options are to compile what I'm trying to install from source (which is available in a tarball) or to use a different rpm. Although the rpm I'm using says it's for "Redhat, Mandriva, SuSe, Ubuntu, Gentoo," it's not actually for the Enterprise version of RedHat.

    I'm going to try my luck with the RedHat 9, i386 RPM and if that doesn't work, I'll compile the source.
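Another possible workaround (untested here; the filename is a placeholder): on any machine whose rpm tools do understand LZMA payloads, unpack the package and copy the files across by hand:

```
rpm2cpio vspd.i586.rpm | cpio -idmv    # extracts the payload under the current directory
```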

    From gmale

VMware RHEL5 guest OS cached memory release

From everything I have read, RHEL will use all available/unused RAM for caching to increase performance.

However, this becomes an issue for me in a VMware environment. VMware (ESX4) relies on the guest OS to let it (ESX) know when the guest is done using memory, so that it can make the memory available to another server.

So let's say I have a guest that runs a DB; the DB runs an intensive job, finishes, and is now idle. But since it cached all the memory, VMware still thinks the DB server needs it, so it doesn't release the RAM to be used by my webserver that needs the extra RAM. Sure, I can live with only as many guests as I have RAM for, but that wastes a huge virtualization feature, since overall my CPU utilization is under 10% and my RAM utilization (as far as VMware knows) is 75%.

Any ideas?

  • One mechanism to reclaim memory from the DB server VM in your example would be the VMware Balloon driver (which is part of the VMware Tools):

    This is VMware physical memory management driver which acts
    like a "balloon" that can be inflated to reclaim physical pages
    by reserving them in the guest and invalidating them in the
    monitor, freeing up the underlying machine pages so they can
    be allocated to other guests. The balloon can also be deflated
    to allow the guest to use more physical memory.

    See e.g. "3.3 Ballooning" in Understanding Memory Resource Management in VMware® ESX Server (pdf).

    You could also run this little script

    sync && echo 3 > /proc/sys/vm/drop_caches

    in the DB server VM to explicitly free pagecache, dentries and inodes if you are sure you don't need the caches anymore.

    From knweiss

SQL Server 2008 - Secure remote access

I wish to give a Microsoft Access database remote access to a SQL Server database, to allow my client to generate reports as and when they wish.

I can do this without any issue by opening the relevant port in the Windows Firewall on the server. This however seems a very insecure way of allowing access and my server is alarmingly being targeted by someone (or more than one person) trying to guess the ‘sa’ password.

While this password has been set very strongly according to published guidelines, I would like to prevent this from happening.

What are the best practices surrounding allowing access to the server in this way?

Many thanks, Anthony

  • Don't use SA, or rename the SA account.

    Anthony : Very fair comment, although sa isn't actually used by anything/anyone to access the server, more for any recovery that must take place. I will rename it, but that wouldn't stop login requests using that username, and my log files will still grow.
    From DaniSQL
  • You should not open a service directly to the public internet like this if you can possibly help it.

    I suggest using some form of VPN which would provide proper security (fully encrypted traffic and so on), will only allow those with a key for the VPN to even see that the SQL server exists, and may even speed things up by adding compression to the mix. It doesn't need to cost anything either: OpenVPN is free (and OSS), stable and reliable - we use it all the time for similar things.

    If you don't want a full VPN then install an SSH server on your machine and let your client connect through that, using its tunnelling feature to forward connections to port 1433. There are a number of ports of the full OpenSSH suite for Windows, so again this solution is F+OSS.
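A sketch of that SSH tunnelling option (the hostname and account are placeholders): the client forwards a local port to the server's SQL Server port, and Access then connects to the local end of the tunnel:

```
# Run on the client; keeps a tunnel open without starting a remote shell
ssh -N -L 1433:127.0.0.1:1433 sqltunnel@dbserver.example.com
# Point Access/ODBC at 127.0.0.1,1433 -- traffic crosses the internet encrypted
```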

    If your client really can't cope with the extra hassle of a VPN or SSH client, then there isn't much you can do. If they have a fixed IP address you can remove login attempts from other sources by limiting the firewall's "accept port 1433 connections" rule to that address. If they have a dynamic address but from a fixed range (i.e. they always connect from the same ISP) then you can at least limit the rule's scope to that range of addresses.

    If your client is reluctant to use a VPN or SSH tunnelling, you can "sell" them the solution by extolling the virtues of decently encrypted traffic (that direct connection will be sending data in plain text), compressed traffic (this could be significant if they run reports that output many rows), and in the case of something like OpenVPN a more reliable connection (OpenVPN is more resilient to a drop-and-reconnect or other network blip than a direct connection will be). Having SQL Server in any form directly addressable by the public network is generally considered a bad idea.

  • It seems like SQL Reporting Services would be perfect here. I could be totally off base, but it seems like it'd be handy to expose reporting to your client (on a separate server, if possible) without putting your DB server in the line of fire.

    Anthony : Reporting services is currently in use however the client would like to author their own reports. Allowing them access to the Report Builder involves relaxing yet more permissions which I’m reluctant to do.
    Kara Marfia : Sounds like a reasonable reluctance!
    jl : Consider a port other than 1433 for SQL Server. That, like the sa login found with Mixed Mode security, is well known to those wanting to hack. Be sure that the port has not been designated for use by another protocol.

HP ProCurve Port Mode Configuration Question

We have a ProCurve Switch 2810-48G (J9022A).

We need to disable auto negotiation on two ports and manually configure them to be full duplex gige ports.

From the web GUI, Configuration Tab, Port Configuration sub tab, I am only presented with the option to configure the port as Auto - 1000. I take this to mean, auto negotiate duplex, manually configure the speed to be gige.

How do I manually configure the port to use full duplex at 1000 Mbps?

  • Use the command line interface instead. When you're logged in as a manager, use the "config" command, then set the parameters for the ports you want (I just used c1,c2 as the example):

    Procurve# config
    ProCurve(config)# interface ethernet c1,c2 speed-duplex 1000-full

    Manuals for the 2810 are here, the Management and Configuration Guide covers the web, menu, and command line interfaces in detail.

    James : +1, just do it from the command line!
    SvrGuy : The trouble is, when I try to set 1000-full from the CLI (as described) I get the following error: "Value 1000-full is not applicable to port 39". The exact commands I entered are: config, then interface 39 speed-duplex 1000-full. BTW: speed-duplex auto-1000 works.
    From Ward
  • Here is the semi-official word from HP. Although meant for the 3500 and 5400 series switches, based on the language it sounds like it would apply to all HP products:

    Q: Why can't I configure the ProCurve 3500yl/5400zl 10/100/1000 Base-T ports for 1000 Full-Duplex operation?

    The full-duplex setting is not required by the IEEE 802.3ab 1000 Base-T standard. In order to be compliant with the standard, devices must be able to auto-negotiate to 1000 full-duplex when the port is configured to "auto". If your device does not operate at 1000 full-duplex with the switch port set to "auto", check your device driver and cable, or contact technical support for your product(s).

    From Dave

Missing Active Directory Account for Mailbox

We have an Exchange 2007 environment running on Windows Server 2008. I recently found a mailbox in the Exchange Management Console pointing to an Active Directory account, but the user account isn't showing up in the Active Directory Users and Computers snap-in. The mailbox is used for a monitoring computer and it is able to log in (I found the password for it). How can a mailbox exist in Exchange 2007 without an Active Directory account? And how can the user log in to access the mailbox without an Active Directory account?

This one has me scratching my head.


  • There has to be an account. Exchange aside, if you're logging on as the user then there's an AD account.

    Try a query with the dsquery tool and see what you find:

     dsquery user -samID account-name

    That should return the DN of the user object in question.

    Edison : Thanks for this. I was able to find the user in a buried OU within AD. Thanks again.

Which database (DBMS) can best utilize multiple cores and more memory?

The databases that I am interested in are: SQL Server 2008, MySQL, and PostgreSQL 9.0.

In general, I am wondering which of those would "scale-up" the best. I have read that PostgreSQL used to scale-up better than MySQL, but that the difference has shrunk with newer versions of MySQL.

In addition to general information, I am also looking for advice for my specific situation:

I have a 64-bit SQL Server 2008 R2 Developer Edition database with 20 years of stock data and 2 years of options data. The hardware is Intel i7 Extreme with 6 cores, 12 GB RAM, 64-bit Windows 7.

The database is fairly large, and intensive calculations, such as moving averages, can take as long as 7 minutes. Furthermore, there's considerable disk activity even after the query is complete. I'm doing all the calculations in stored procedures right now.

My system is too slow and I am trying to improve its performance and efficiency. I am currently improving my data model and tweaking the software setup. Any suggestions?

Also, when should someone consider using MySQL Cluster? (Since I'm asking, I'm sure the answer is, "Not yours!")

  • My system is too slow and I am trying to improve its performance and efficiency.

    • Too little memory.

    • And, most important - like most people who don't really know databases - you talk a lot about cores and RAM (and Win 7 - get rid of it and install a Windows Server, please), but totally ignore the one thing most important for database performance: DISCS. How many discs do you run?

    As an example, I run a futures database - and my SQL Server has a set of 6 Velociraptors JUST for the data, and 2 more discs for tempdb and logs. This is on a SAS infrastructure with a hardware RAID controller. And I am not sure I like the IO performance ;)

    Furthermore, there's considerable disk activity even after the query is complete

    • Too little RAM
    • Normal behavior. Transactional databases (and precalculating moving averages is that) are always disc heavy. Normal computers suck for databases for this exact reason. There is a large section in the documentation on how SQL Server uses (is forced to use) discs.

    Get discs - or better, SSDs - to give yourself a powerful disc subsystem.

    JohnB : Transactional databases: but I'm not doing any *transactions*, in fact, only reads. Also, I have already considered more discs and partitioning, but I haven't gotten to that stage yet. Thanks!
    JohnB : Why is Windows Server better than Win7?
    phuzion : Windows Server is better than Windows 7 because it is built to focus on the performance of background services (Database, web, DNS, DHCP, etc) over the performance of foreground apps (MMC, IE, notepad, etc). There are other reasons, but that is the primary reason to choose a server OS over a desktop OS for a server application such as a database.
    From TomTom
  • You're eventually going to hit a brick wall in terms of performance if you rely on stored procedures for large datasets. If you need faster response times, you probably want to look at offloading these calculations from the DBMS.


    I made the incorrect assumption that you were talking about some sort of transaction system where it is difficult to make changes to the data model. I work at a big enterprise where doing that is very difficult. You can also "offload" calculations by doing them once, ahead of time.

    Before you do anything, you should study the query plans very carefully and understand which queries are using the most resources and why. Think about what you're actually doing -- with the example of computing moving averages, consider that you're referencing historical data that doesn't change. If you need to plot the 52-week moving average of IBM from 1982-1992... why compute it on demand? Do it in advance! Storage capacity is generally cheap -- IOPS and CPU are generally expensive.

    Another thing that I've seen happen is that folks over-normalize (because that is taught as the "right thing to do"). Sometimes, especially for static data, de-normalizing improves performance by eliminating joins.
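    For instance, the "do it in advance" approach can be as simple as a nightly job that computes each rolling average once and bulk-inserts the results into a summary table, so queries become plain lookups. A minimal sketch of the windowed-average logic in Python (the job scheduling and table names are left out; this only shows the computation you would run ahead of time):

```python
from collections import deque

def moving_averages(closes, window):
    """Simple moving average over a price series.

    Emits None for days before a full window exists, then the
    average of the trailing `window` prices for each later day.
    """
    buf = deque(maxlen=window)
    total = 0.0
    out = []
    for price in closes:
        if len(buf) == buf.maxlen:
            total -= buf[0]          # drop the oldest price from the sum
        buf.append(price)            # deque discards the oldest element
        total += price
        out.append(total / window if len(buf) == window else None)
    return out

# Run once per symbol, then INSERT the results into a summary table
# so reporting queries never recompute them.
print(moving_averages([10.0, 11.0, 12.0, 13.0, 14.0], 3))
# → [None, None, 11.0, 12.0, 13.0]
```

    The point is that the expensive scan over history happens once, off-peak, instead of inside every stored-procedure call.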

    JohnB : I like your suggestion. Can you elaborate on when it's best to offload calculations from the DBMS? Birger claims that it's best to bring as much load as possible to the database: http://stackoverflow.com/questions/1749453/access-against-mysql-or-postgres/1750215#1750215
    duffbeer703 : I think Birger is right in the majority of situations. You need to consider a number of things, how much you need to scale, budget, etc. Even with unlimited funds, you still need to think really hard about what you're trying to do. I updated my answer with some other ideas as well.

Can you combine dhcp dynamic dns updates and static IPs in the same Bind zone?

I'm in the process of configuring ISC dhcpd and Bind9 on SLES Linux in our office for internal DNS. I'd like to put everything in one zone, city.domain.tld for example. I can get the dhcpd server to update the zone, but if I add static records to the zone (a new file server with its IP, for example) Bind complains about the journal no longer matching.

Assigning fixed IPs via DHCP to the servers is not a solution because dhcpd doesn't send DNS updates for fixed IPs.

I could use two separate zones, one with static IPs and one for the dynamic updates, but I'd rather not have to go to all the workstations (80ish) and add a second search suffix.

Workstations are a mix of Windows, Linux and Mac. No domains or Active Directory.

Any suggestions? Is deleting the journal and letting DHCP re-update the dns the only solution? Searching Google didn't turn anything up.

2010.7.14 Update:
Here are the version numbers:

  • bind-9.3.2-17.15
  • dhcpcd-1.3.22pl4-223.2
  • SLES 10 i386

Upgrading to a newer version of Bind/dhcpd is not out of the question if it solves the problems.

  • Are you running a recent version of bind, later than 9.3.0?

    If so, you can use the command rndc freeze zone.tld to freeze the zone. Once it is in this state, dynamic update requests will be rejected and you can manually edit the zone database. Use the command rndc unfreeze zone.tld to return the zone to a state that allows dynamic updates. You do need to update the serial number if you make changes manually.
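    Sketched as commands (the zone name and file path are examples; on newer BIND releases the unfreeze operation is spelled "thaw"):

```shell
# Flush the journal and refuse dynamic updates while you edit
rndc freeze city.domain.tld

# Hand-edit the zone file; remember to bump the serial number
vi /var/lib/named/master/city.domain.tld

# Re-enable dynamic updates ("thaw" on newer BIND, "unfreeze" on older)
rndc thaw city.domain.tld
```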


    Phil P : It's `thaw`, not `unfreeze`, but this is right.
    David : Agreed. Seems the docs and the actual command differ. Bv9.3 ARM says "unfreeze" but rndc's help command says "thaw".
    From Zoredache

How to change the sender of admin mails on a Linux server


I am sure it is a dumb question, but I want to change the sender of mail sent by my Linux server (CentOS 5). All mail sent to user "dev" is forwarded via a /home/dev/.forward file, with this sender: dev@host.epiconcept.fr.

host.mydomain.fr is the hostname of this server, but it's an internal server and it does not even have a public IP. How can I get a sender like admin@mydomain.fr? I tried to change the "official domain name" (the line beginning with "Dj"), but it changed nothing.


  • Normally for this I use generic maps from Postfix (IMHO one of the best - if not the best - mail servers for *nix).


    On CentOS, if I remember correctly, the default SMTP daemon is Sendmail.
    In this case I'd suggest you install Postfix (much easier to manage).
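    A minimal sketch of that setup, assuming Postfix is installed (paths are the Postfix defaults; addresses are taken from the question):

```
# /etc/postfix/main.cf -- enable generic address rewriting on outbound mail
smtp_generic_maps = hash:/etc/postfix/generic

# /etc/postfix/generic -- map the internal sender to the public one
dev@host.epiconcept.fr    admin@mydomain.fr

# Then rebuild the lookup table and reload:
#   postmap /etc/postfix/generic
#   postfix reload
```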

    From Pier
  • The actual address used as the sender is determined by the application doing the sending, unless you decide to get the MTA involved for some reason. See the documentation of the specific application to discover where to change it.

    Cédric Girard : I think the sending is done by the MTA, because mails are sent to user root and forwarded to another address because of a /root/.forward file.
  • If you need only to send mail from services and do not need to receive incoming mail or have local mailboxes, I'd rather use ssmtp than postfix or some other heavyweight mail system.

    The whole config of ssmtp fits in just a couple of lines and it has an easy and straightforward way to rewrite the sender's address.
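    For example, a complete /etc/ssmtp/ssmtp.conf can be as small as this (the relay host and addresses are examples, not values from the question's setup):

```
# /etc/ssmtp/ssmtp.conf -- values below are examples
root=admin@mydomain.fr       # where mail for local system accounts goes
mailhub=smtp.mydomain.fr     # the relay that actually delivers the mail
rewriteDomain=mydomain.fr    # rewrite the domain part of the sender
hostname=host.mydomain.fr
FromLineOverride=YES         # let the message's own From: header win
```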

    From Erthad

How Can We Create Blackbox Logs for Nginx?

There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data."

The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance, missing from the hit tracking log formats: Keep alive status, remote port, child processes, bytes sent, etc.

LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

I'm trying to recreate as much of the format as possible for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names.

access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"'
                    's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent'

Here's a table of the variables I've been able to map from the Nginx documentation.

%a = $remote_addr - The IP address of the remote client.
%S = $remote_port - The port of the remote client.
%X = ? - Keep alive status.
%t = $time_local - The start time of the request.
%r = $request - The first line of request containing method verb, path and protocol.
%s = ? - Status before any redirections.
%>s = $status - Status after any redirections.
%{pid}P = $pid - The process id.
%{tid}P = N/A - The thread id, which is not applicable to Nginx.
%T = ? - The time in seconds to handle the request.
%D = $request_time - The time to handle the request (%D is in microseconds; $request_time is in seconds with millisecond resolution).
%I = ? - The count of bytes received including headers. 
%O = $bytes_sent - The count of bytes sent including headers.
%B = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

Looking for help filling in the missing variables, or confirmation that the missing variables are in fact, unavailable in Nginx.

  • Nginx is not like Apache -- the power of Nginx is in (1) server/location matching and URI rewriting, (2) proxying/failover requests (to HTTP/FastCGI upstreams) and (3) caching of the result. So you may face real problems somewhere in these areas, when you need to measure how your backend service responds and which of the requests are cached. I doubt there can be any universal "blackbox" format for Nginx, since it has a lot of variables and moreover they are dynamic (i.e. you can write the value of cookie "Registered" and corresponding cache status to the log in order to see whether registered users get cached content or not)

    Nevertheless, it is very useful to extend short "combined" log format.


    Some hints on the general variables:

    $uri - URI after rewrites
    $args - Query arguments
    $realpath_root - Value of "root" for this request
    $server_name - the name of server which is processing the request
    $connection - the number of connection


    Some useful variables when proxying. These variables may contain several values: separated with "," when upstreams were asked in turn, and with ":" when an X-Accel-Redirect has been issued:

    $upstream_addr - the upstream IPs
    $upstream_response_time - the upstream processing times
    $upstream_status - the upstream statuses


    One variable useful to log caching status:

    $upstream_cache_status = MISS | EXPIRED | UPDATING (stale answer used) | STALE (stale answer used) | HIT
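
    Pulling those variables together, an extended format might look like the following sketch ($request_length approximates Apache's %I; there is no direct Nginx equivalent for the keepalive status or the pre-redirect status):

```
# nginx.conf, http {} context -- a "blackbox-ish" extended format
log_format blackbox '$remote_addr/$remote_port [$time_local] "$request" '
                    '$status $pid $request_time '
                    '$request_length/$bytes_sent/$body_bytes_sent '
                    'up=$upstream_addr/$upstream_status/$upstream_response_time '
                    'cache=$upstream_cache_status';

access_log /var/log/nginx/blackbox.log blackbox;
```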
    Alan Gutierrez : Why not? There's a universal log format for hit counting, "combined." Why can't there be one for performance? Then you could have people write scripts to report on the default performance log format, which could be extended or supplemented as your application needs dictate. Convention versus configuration. Chris Josephes' article was a nice contribution to Apache. It was nice that someone thought about the data you'll want to have when *unanticipated* performance or security issues arise.
    From Alaz